<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community</title>
    <description>The most recent home feed on DEV Community.</description>
    <link>https://dev.to</link>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed"/>
    <language>en</language>
    <item>
      <title>Stop Flickering UIs: Handle Subdomains with Next.js Edge Middleware ⚡</title>
      <dc:creator>Prajapati Paresh</dc:creator>
      <pubDate>Thu, 23 Apr 2026 06:00:10 +0000</pubDate>
      <link>https://dev.to/iprajapatiparesh/stop-flickering-uis-handle-subdomains-with-nextjs-edge-middleware-egn</link>
      <guid>https://dev.to/iprajapatiparesh/stop-flickering-uis-handle-subdomains-with-nextjs-edge-middleware-egn</guid>
      <description>&lt;h2&gt;The Challenge of Dynamic Subdomains&lt;/h2&gt;

&lt;p&gt;When scaling a professional B2B SaaS platform at Smart Tech Devs, we found that giving clients their own custom subdomains (e.g., &lt;code&gt;acme.smarttechdevs.in&lt;/code&gt;) is a premium feature that instantly elevates the brand. Architecting this on the frontend, however, can be complex. In a monolithic React app, you might find yourself writing messy &lt;code&gt;useEffect&lt;/code&gt; hooks that parse &lt;code&gt;window.location.host&lt;/code&gt; and render different components based on the URL.&lt;/p&gt;

&lt;p&gt;This approach is slow, causes screen flickering, and ruins SEO. To build enterprise-grade routing, we must intercept the request &lt;em&gt;before&lt;/em&gt; it ever reaches the React rendering engine. We do this using &lt;strong&gt;Next.js Edge Middleware&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;The Power of the Edge&lt;/h2&gt;

&lt;p&gt;Next.js Middleware runs on the Edge (e.g., Vercel's Edge Network), meaning the code executes in milliseconds at a server node geographically closest to the user. It intercepts the incoming HTTP request, inspects the headers, and transparently rewrites the URL to the correct internal Next.js route without changing the URL shown in the user's browser.&lt;/p&gt;

&lt;h3&gt;Step 1: Architecting the App Router Directory&lt;/h3&gt;

&lt;p&gt;First, we organize our Next.js &lt;code&gt;app/&lt;/code&gt; directory to handle the dynamic rewrites. We create a dynamic route segment specifically for subdomains.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
app/
├── (public)/         # Your main marketing site (smarttechdevs.in)
│   └── page.tsx
└── [tenant]/         # The hidden folder where subdomain traffic is routed
    └── dashboard/
        └── page.tsx
middleware.ts         # The Edge routing engine (project root, next to app/)
&lt;/code&gt;&lt;/pre&gt;

&lt;h3&gt;Step 2: The Middleware Logic&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;middleware.ts&lt;/code&gt; file sits at the root of your project. Here, we parse the host header, determine if it is a subdomain, and rewrite the URL transparently to our hidden &lt;code&gt;[tenant]&lt;/code&gt; folder.&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;
// middleware.ts
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export const config = {
    // Only run middleware on relevant paths (ignore static files, API, etc.)
    matcher: [
        "/((?!api/|_next/|_static/|_vercel|[\\w-]+\\.\\w+).*)",
    ],
};

export default async function middleware(req: NextRequest) {
    const url = req.nextUrl;
    
    // 1. Get hostname (e.g., 'acme.smarttechdevs.in' or 'localhost:3000')
    const hostname = req.headers.get('host') || 'smarttechdevs.in';

    // 2. Define your root domains to exclude them from subdomain logic
    const isRootDomain = hostname === 'smarttechdevs.in' || hostname === 'localhost:3000';

    // 3. Extract the subdomain (if it exists), stripping any port first
    // 'acme.smarttechdevs.in' -&amp;gt; 'acme'
    const subdomain = hostname.split(':')[0].replace(`.smarttechdevs.in`, '');

    if (!isRootDomain &amp;amp;&amp;amp; subdomain) {
        // 4. Transparent Rewrite: 
        // The user sees 'acme.smarttechdevs.in/dashboard'
        // Next.js internally renders 'app/[tenant]/dashboard/page.tsx' where [tenant] = 'acme'
        return NextResponse.rewrite(new URL(`/${subdomain}${url.pathname}`, req.url));
    }

    // Pass through normal root domain requests untouched
    return NextResponse.next();
}
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;The Engineering ROI&lt;/h2&gt;

&lt;p&gt;Using Edge Middleware for subdomain routing fundamentally upgrades your SaaS architecture:&lt;/p&gt;

&lt;ul&gt;
    &lt;li&gt;
&lt;strong&gt;Zero Layout Shift:&lt;/strong&gt; Because the routing decision happens on the server edge, the user never sees a flash of the wrong layout or a loading spinner while the app figures out who they are.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Blazing Fast Verification:&lt;/strong&gt; You can also inject JWT authentication verification directly into this middleware, rejecting unauthorized users before they consume any React server rendering resources.&lt;/li&gt;
    &lt;li&gt;
&lt;strong&gt;Clean Codebase:&lt;/strong&gt; Your React components remain pure. They don't have to parse URLs; they simply receive the &lt;code&gt;tenant&lt;/code&gt; parameter cleanly as a prop from the dynamic folder structure.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Dynamic subdomains require a dynamic routing strategy. By pushing the heavy lifting of domain parsing to Next.js Edge Middleware, you ensure that your B2B clients receive a perfectly tailored, blazingly fast experience from the very first byte of the page load.&lt;/p&gt;

</description>
      <category>nextjs</category>
      <category>react</category>
      <category>saas</category>
      <category>architecture</category>
    </item>
    <item>
      <title>How a Blockchain Works - Step 5/8: Linking Block to Block</title>
      <dc:creator>Amel In Tech</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:51:45 +0000</pubDate>
      <link>https://dev.to/amel_in_tech/fonctionnement-dune-blockchain-etape-58-lien-maillon-maillon-55il</link>
      <guid>https://dev.to/amel_in_tech/fonctionnement-dune-blockchain-etape-58-lien-maillon-maillon-55il</guid>
      <description>&lt;p&gt;Each block &lt;strong&gt;points&lt;/strong&gt; (via its hash) to the &lt;strong&gt;previous block&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;This chaining makes the history &lt;strong&gt;hard to falsify&lt;/strong&gt;, because modifying an old block would break every hash that follows it.&lt;/p&gt;

&lt;h3&gt;What is a &lt;em&gt;hash&lt;/em&gt;?&lt;/h3&gt;

&lt;p&gt;A &lt;strong&gt;hash&lt;/strong&gt; is a unique digital fingerprint computed from a piece of data.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The same data always produces the same hash&lt;/li&gt;
&lt;li&gt;A single small change produces a completely different hash&lt;/li&gt;
&lt;li&gt;The original data cannot be recovered from the hash (it is a one-way function)&lt;/li&gt;
&lt;/ul&gt;
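&lt;p&gt;These properties are easy to verify yourself. Here is a short Python sketch using the standard library's &lt;code&gt;hashlib&lt;/code&gt; (the example strings are made up):&lt;/p&gt;

```python
import hashlib

def sha256_hex(text):
    # Hash a UTF-8 string and return its hex fingerprint.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# Same data, same hash: the function is deterministic.
a = sha256_hex("Alice pays Bob 5 BTC")
b = sha256_hex("Alice pays Bob 5 BTC")
print(a == b)  # True

# One tiny change produces a completely different fingerprint.
c = sha256_hex("Alice pays Bob 6 BTC")
print(a == c)  # False
```

&lt;p&gt;The two fingerprints share nothing recognizable, even though the inputs differ by a single character.&lt;/p&gt;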

&lt;p&gt;Each block contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;its own hash&lt;/li&gt;
&lt;li&gt;the hash of the previous block, which is what creates an &lt;strong&gt;unbreakable chain&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjld0ead4hanw6dnv2grl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjld0ead4hanw6dnv2grl.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;How is a hash created?&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Take the block's data (transactions, timestamp, Merkle root, etc.)&lt;/li&gt;
&lt;li&gt;Run it through a &lt;strong&gt;cryptographic hash function&lt;/strong&gt; (e.g. SHA-256 for Bitcoin)&lt;/li&gt;
&lt;li&gt;Out comes a unique fingerprint&lt;/li&gt;
&lt;/ul&gt;
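&lt;p&gt;Putting the pieces together, here is a deliberately simplified Python sketch of the chain link: each block's hash covers its data plus the previous block's hash, so tampering with an old block breaks every later link. (No Merkle root or proof of work here, and the block data is invented for illustration.)&lt;/p&gt;

```python
import hashlib, json

def block_hash(data, prev_hash):
    # The fingerprint covers the block data AND the previous hash:
    # this is what chains the blocks together.
    payload = json.dumps([data, prev_hash], sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

# Build a tiny three-block chain.
chain = []
prev = "0" * 64  # the genesis block has no predecessor
for data in ["block 1 txs", "block 2 txs", "block 3 txs"]:
    h = block_hash(data, prev)
    chain.append({"data": data, "prev": prev, "hash": h})
    prev = h

def is_valid(chain):
    # Recompute every link; any mismatch means tampering.
    ok = True
    for i, blk in enumerate(chain):
        ok = ok and blk["hash"] == block_hash(blk["data"], blk["prev"])
        if i:
            ok = ok and blk["prev"] == chain[i - 1]["hash"]
    return ok

print(is_valid(chain))          # True
chain[0]["data"] = "tampered"   # modify an old block...
print(is_valid(chain))          # False: the chain is broken
```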

</description>
      <category>beginners</category>
      <category>blockchain</category>
      <category>computerscience</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Building MCP Servers in Python: a production primer for 2026</title>
      <dc:creator>Tufail Khan</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:49:09 +0000</pubDate>
      <link>https://dev.to/tufailkhan457/building-mcp-servers-in-python-a-production-primer-for-2026-4kh2</link>
      <guid>https://dev.to/tufailkhan457/building-mcp-servers-in-python-a-production-primer-for-2026-4kh2</guid>
      <description>&lt;p&gt;The Model Context Protocol (MCP) went from "Anthropic side project" to &lt;strong&gt;industry standard&lt;/strong&gt; in eighteen months. As of March 2026, MCP SDKs are pulling &lt;strong&gt;97 million monthly downloads&lt;/strong&gt;. Every serious agent framework — Claude, Cursor, OpenAI Agents SDK, Microsoft Agent Framework — speaks MCP natively.&lt;/p&gt;

&lt;p&gt;If you're a Python backend engineer, MCP is the most leveraged thing you can learn right now. This post is a practical walkthrough of shipping a production-grade MCP server using &lt;strong&gt;FastMCP&lt;/strong&gt;, the Python framework that makes it boring.&lt;/p&gt;

&lt;h2&gt;What MCP actually is&lt;/h2&gt;

&lt;p&gt;MCP is a protocol for exposing &lt;strong&gt;tools&lt;/strong&gt;, &lt;strong&gt;resources&lt;/strong&gt;, and &lt;strong&gt;prompts&lt;/strong&gt; to an AI agent in a standardized way. Instead of each agent framework inventing its own adapter format, you write your server once and it plugs into any MCP-compatible client.&lt;/p&gt;

&lt;p&gt;Think of it as &lt;strong&gt;"USB-C for agents."&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A minimal server exposes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Tools&lt;/strong&gt; — functions the agent can call (e.g. &lt;code&gt;search_customers&lt;/code&gt;, &lt;code&gt;get_order_status&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resources&lt;/strong&gt; — URIs the agent can read (e.g. &lt;code&gt;crm://contacts/123&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Prompts&lt;/strong&gt; — parameterized prompt templates&lt;/li&gt;
&lt;/ul&gt;
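&lt;p&gt;On the wire, all of this is JSON-RPC 2.0. The method name &lt;code&gt;tools/call&lt;/code&gt; and the result envelope below follow the MCP specification; the tool name and payload values are illustrative:&lt;/p&gt;

```python
import json

# Illustrative shape of an MCP tool invocation over JSON-RPC 2.0.
# "tools/call" and the params envelope come from the MCP spec;
# the tool name and arguments reuse this article's CRM example.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_customers",
        "arguments": {"query": "acme", "tier": "enterprise"},
    },
}

# A successful response wraps the tool output in a content list.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text",
             "text": json.dumps([{"id": "c_1", "name": "Acme"}])},
        ],
        "isError": False,
    },
}

print(json.dumps(request, indent=2))
```

&lt;p&gt;Every MCP-compatible client speaks exactly this shape, which is why one server plugs into all of them.&lt;/p&gt;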

&lt;h2&gt;Starter: a FastMCP server in 40 lines&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# server.py
&lt;/span&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastmcp&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastMCP&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;pydantic&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;BaseModel&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;

&lt;span class="n"&gt;mcp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;FastMCP&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;internal-crm&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;Customer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;BaseModel&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="nb"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;tier&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;
    &lt;span class="n"&gt;mrr&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;float&lt;/span&gt;

&lt;span class="nd"&gt;@mcp.tool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;search_customers&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tier&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt; &lt;span class="o"&gt;|&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;list&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;Customer&lt;/span&gt;&lt;span class="p"&gt;]:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Search the CRM for customers by name or email. Optionally filter by tier.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AsyncClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
            &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://crm.internal/api/search&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="n"&gt;params&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;q&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;query&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;tier&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;tier&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nc"&gt;Customer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;row&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;row&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;json&lt;/span&gt;&lt;span class="p"&gt;()]&lt;/span&gt;

&lt;span class="nd"&gt;@mcp.tool&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;get_customer_notes&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Fetch the latest account-manager notes for a customer.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AsyncClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://crm.internal/api/notes/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;

&lt;span class="nd"&gt;@mcp.resource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;crm://customer/{customer_id}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;customer_resource&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="nb"&gt;str&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Read-only customer profile.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;async&lt;/span&gt; &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="n"&gt;httpx&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;AsyncClient&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;r&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://crm.internal/api/customer/&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;customer_id&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;r&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;text&lt;/span&gt;

&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;__name__&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;__main__&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;mcp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;run&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;transport&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;streamable-http&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;host&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;0.0.0.0&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;port&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8000&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's a complete, production-adjacent MCP server. Type-safe inputs and outputs via Pydantic. Docstrings become tool descriptions the agent reads. Resources get URIs the agent can embed in its context.&lt;/p&gt;

&lt;h2&gt;The transport shift: stdio → Streamable HTTP&lt;/h2&gt;

&lt;p&gt;Every MCP tutorial from 2024 used &lt;code&gt;stdio&lt;/code&gt; transport — the server runs as a subprocess, the agent pipes JSON-RPC over stdin/stdout. That's fine for desktop tools like Claude Desktop. It's the wrong answer for production.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Streamable HTTP&lt;/strong&gt; (finalized in the 2025 spec) fixes this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Servers run as long-lived HTTP services, not per-invocation subprocesses&lt;/li&gt;
&lt;li&gt;Scale horizontally behind a load balancer&lt;/li&gt;
&lt;li&gt;Share across teams and apps&lt;/li&gt;
&lt;li&gt;Deploy once, discover via URL&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In FastMCP, the switch is one line: &lt;code&gt;transport="streamable-http"&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;Auth: OAuth 2.1 the boring way&lt;/h2&gt;

&lt;p&gt;MCP's 2025 spec added OAuth 2.1 as the standard auth mechanism. You don't roll your own. FastMCP ships with OAuth middleware that plugs into your existing IdP (Auth0, Okta, Cognito, Clerk, etc.):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;fastmcp.auth&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OAuth2Middleware&lt;/span&gt;

&lt;span class="n"&gt;mcp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;add_middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nc"&gt;OAuth2Middleware&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;issuer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://tufail.auth0.com/&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;audience&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mcp-internal-crm&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;required_scope&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;crm:read&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;))&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent handles the authorization dance. Your server just enforces scopes on each tool.&lt;/p&gt;

&lt;h2&gt;Deploying to AWS without overspending&lt;/h2&gt;

&lt;p&gt;Two patterns we've landed on for production MCP:&lt;/p&gt;

&lt;h3&gt;Pattern A — Low-traffic internal tools: &lt;strong&gt;Lambda + API Gateway&lt;/strong&gt;&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Use &lt;code&gt;mangum&lt;/code&gt; or FastMCP's ASGI adapter to run inside Lambda&lt;/li&gt;
&lt;li&gt;Cold starts ~300-500ms (acceptable for human-speed agent interactions)&lt;/li&gt;
&lt;li&gt;Cost: near-zero when idle&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;Pattern B — High-traffic shared servers: &lt;strong&gt;ECS Fargate behind ALB&lt;/strong&gt;&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;One service per logical server&lt;/li&gt;
&lt;li&gt;Auto-scale on CPU/memory&lt;/li&gt;
&lt;li&gt;Pair with ElastiCache for stateful session continuity&lt;/li&gt;
&lt;li&gt;Cost: predictable, ~$30/mo for a small always-on service&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The mistake we made early on: treating every MCP server like it needed an always-on Fargate task. For servers that handle &amp;lt;10 agent calls/hour, Lambda is dramatically cheaper.&lt;/p&gt;

&lt;h2&gt;What to expose — and what not to&lt;/h2&gt;

&lt;p&gt;The #1 mistake I see is devs exposing their entire internal API as MCP tools. &lt;strong&gt;Don't.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Good MCP servers are &lt;em&gt;curated&lt;/em&gt; for an agent's use case. Ask: what would a smart human operator need to do their job? Expose &lt;em&gt;those&lt;/em&gt; 5-15 tools. Not your 300-endpoint API.&lt;/p&gt;

&lt;p&gt;Good tool design:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;One clear job per tool.&lt;/strong&gt; &lt;code&gt;search_customers&lt;/code&gt; not &lt;code&gt;crm_unified_query&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Typed inputs and outputs.&lt;/strong&gt; Pydantic makes this cheap.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Honest docstrings.&lt;/strong&gt; The agent reads them. Lie in the docstring and the agent will confidently call your tool wrong.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Idempotent where possible.&lt;/strong&gt; Agents retry. Accept that.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What's next&lt;/h2&gt;

&lt;p&gt;Remote MCP servers + fine-grained OAuth scopes are unlocking internal-AI-assistant work that was impossible a year ago. If you're a Python backend engineer and you haven't shipped an MCP server yet, pick your highest-leverage internal system and wrap it. You'll be surprised how quickly it changes how your team works.&lt;/p&gt;

</description>
      <category>mcp</category>
      <category>python</category>
      <category>claude</category>
      <category>agentic</category>
    </item>
    <item>
      <title>I Built a Local AI VRAM Calculator &amp; GPU Planner (Beta)</title>
      <dc:creator>logarithmicspirals</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:39:59 +0000</pubDate>
      <link>https://dev.to/logarithmicspirals/i-built-a-local-ai-vram-calculator-gpu-planner-beta-g7b</link>
      <guid>https://dev.to/logarithmicspirals/i-built-a-local-ai-vram-calculator-gpu-planner-beta-g7b</guid>
      <description>&lt;p&gt;I added a new tool to the site: the &lt;a href="https://dev.to/tools/local-ai-vram-calculator-gpu-planner/"&gt;Local AI VRAM Calculator &amp;amp; GPU Planner (Beta)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This came out of a problem I kept running into while experimenting with local models. It is surprisingly easy to end up with a setup that technically works, but does not actually match the kind of workloads you want to run.&lt;/p&gt;

&lt;p&gt;This is mainly aimed at figuring out GPU and VRAM requirements for running local LLMs. Most advice in this space is broadly correct, but not very specific. Things like “get more VRAM” or “use NVIDIA” help at a high level, but they do not help much when you are choosing between specific GPUs, VRAM tiers, quantization levels, or context window sizes.&lt;/p&gt;

&lt;p&gt;I wanted something that made those tradeoffs visible before committing to hardware.&lt;/p&gt;

&lt;h2&gt;What the Planner Does&lt;/h2&gt;

&lt;p&gt;The Local AI VRAM Calculator &amp;amp; GPU Planner (Beta) takes a few inputs: a GPU from the site snapshot or a manual VRAM tier, system RAM, quantization level, context length, and the primary workload.&lt;/p&gt;

&lt;p&gt;From there it tries to give a practical read on whether the setup makes sense. That includes a rough fit score, GPU-specific notes when you pick a card from the snapshot, and a set of model recommendations based on the selected workload.&lt;/p&gt;

&lt;p&gt;The part I focused on the most is the estimate breakdown. Instead of showing a single number, the planner separates the estimate into model weights, KV cache, runtime overhead, total VRAM, and storage.&lt;/p&gt;

&lt;p&gt;That makes it easier to see what actually changes when you adjust something like context length or quantization. In a lot of cases, the bottleneck is not where you expect.&lt;/p&gt;

&lt;p&gt;The estimates are not meant to be exact. Some are configuration-based, others are heuristic. The tool tries to label that clearly so it is obvious how much confidence to put in each result.&lt;/p&gt;

&lt;p&gt;The context selector is also capped by the model metadata currently loaded into the planner. In practice that means the available maximum is based on the current curated model snapshot, plus any public Hugging Face models you import into the tool.&lt;/p&gt;

&lt;h2&gt;How Much VRAM Do You Need for Local LLMs?&lt;/h2&gt;

&lt;p&gt;This is the question I kept coming back to, and it is harder to answer than it should be.&lt;/p&gt;

&lt;p&gt;As a rough guideline:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Smaller models (7B–8B) can often run on 8–12 GB of VRAM with quantization.&lt;/li&gt;
&lt;li&gt;13B–14B models typically need around 12–16 GB.&lt;/li&gt;
&lt;li&gt;Larger models usually require 24 GB or more, or some form of offloading.&lt;/li&gt;
&lt;li&gt;Context length increases memory usage, sometimes more than expected.&lt;/li&gt;
&lt;li&gt;Runtime overhead and KV cache can add a meaningful amount on top of raw model size.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are not strict rules, but they are useful for avoiding obviously bad configurations.&lt;/p&gt;
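&lt;p&gt;The arithmetic behind such a breakdown is simple enough to sketch. The function below is a rough heuristic of my own, not the planner's actual formula: weights scale with parameter count times bits per weight, and the KV cache scales with layers, KV heads, head size, and context length (the model shape in the example is a hypothetical Llama-style 8B):&lt;/p&gt;

```python
def estimate_vram_gb(params_b, bits_per_weight, n_layers, n_kv_heads,
                     head_dim, context_len, overhead_gb=1.5):
    # Model weights: parameter count times quantized bits per weight.
    weights_gb = params_b * 1e9 * bits_per_weight / 8 / 1024**3
    # KV cache: 2 (K and V) * layers * kv_heads * head_dim * context,
    # assuming fp16 (2 bytes) cache entries.
    kv_gb = 2 * n_layers * n_kv_heads * head_dim * context_len * 2 / 1024**3
    return {
        "weights_gb": round(weights_gb, 2),
        "kv_cache_gb": round(kv_gb, 2),
        "overhead_gb": overhead_gb,
        "total_gb": round(weights_gb + kv_gb + overhead_gb, 2),
    }

# Hypothetical 8B model, 4-bit quantized, Llama-3-style shape
# (32 layers, 8 KV heads via GQA, head_dim 128), 8k context.
print(estimate_vram_gb(8, 4, 32, 8, 128, 8192))
```

&lt;p&gt;For that configuration the weights dominate at roughly 3.7 GB, the 8k-token KV cache adds about 1 GB, and with runtime overhead the total lands around 6 GB, which is exactly the kind of component-level view the planner surfaces.&lt;/p&gt;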

&lt;p&gt;The planner is meant to make these tradeoffs visible. Instead of guessing whether a model will fit, you can see how each component contributes to total VRAM usage.&lt;/p&gt;

&lt;h2&gt;Why This Is Single GPU Only&lt;/h2&gt;

&lt;p&gt;I originally added a multi-GPU option and removed it.&lt;/p&gt;

&lt;p&gt;In practice, two GPUs do not behave like one larger pool of VRAM. Some runtimes can split work across devices, but many workflows still depend on the model fitting mostly on a single card. Performance also depends on details that are hard to generalize, like backend support and interconnect behavior.&lt;/p&gt;

&lt;p&gt;Given that, a single-GPU estimate felt more honest. If a setup does not make sense on one card, the tool should not imply that adding another card will automatically fix it.&lt;/p&gt;

&lt;h2&gt;Where This Fits&lt;/h2&gt;

&lt;p&gt;I wrote previously about using &lt;a href="https://dev.to/blog/using-tailscale-to-access-private-llms/"&gt;Tailscale to access private LLMs&lt;/a&gt;, which focuses on the networking side of running local models.&lt;/p&gt;

&lt;p&gt;This tool is more about the step before that: deciding what kind of hardware and model setup is actually reasonable.&lt;/p&gt;

&lt;p&gt;In practice, both pieces are part of the same system. Running local LLMs ends up touching hardware, storage, networking, and a few operational decisions that are easy to overlook at the start.&lt;/p&gt;

&lt;h2&gt;Try It&lt;/h2&gt;

&lt;p&gt;If you are trying to figure out whether your GPU can run a specific LLM, you can try it here: &lt;a href="https://dev.to/tools/local-ai-vram-calculator-gpu-planner/"&gt;Local AI VRAM Calculator &amp;amp; GPU Planner (Beta)&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is not a benchmark or a guarantee of how every runtime will behave. It is closer to a planning tool: something that makes the constraints visible and helps avoid obviously bad decisions within the GPU snapshot and model data the site currently ships.&lt;/p&gt;

&lt;p&gt;I will keep updating the underlying data as I test more setups.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>gpu</category>
      <category>ollama</category>
      <category>homenetwork</category>
    </item>
    <item>
      <title>Shadow Deployments for AI Agents: Test in Prod without breaking anything 🚀</title>
      <dc:creator>Amar Dhillon</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:39:39 +0000</pubDate>
      <link>https://dev.to/amarjit_dhillon/shadow-deployments-for-ai-agents-test-in-production-without-breaking-anything-55e2</link>
      <guid>https://dev.to/amarjit_dhillon/shadow-deployments-for-ai-agents-test-in-production-without-breaking-anything-55e2</guid>
      <description>&lt;p&gt;If you’ve worked with AI agents in production, you already know one thing: deploying a new version is not the same as deploying traditional software.&lt;/p&gt;

&lt;p&gt;With non-AI systems, you push code and run tests. If everything looks fine, you go live.&lt;/p&gt;

&lt;p&gt;With agents, things get messy. The same input can produce slightly different outputs. Improvements in reasoning might come with unexpected side effects. Sometimes a “better” model performs worse in edge cases that actually matter.&lt;/p&gt;

&lt;p&gt;So the real challenge is not building a better agent. The challenge is &lt;strong&gt;proving that it’s better before users see it&lt;/strong&gt; 🔍&lt;/p&gt;




&lt;h3&gt;
  
  
  Why Traditional Deployment Fails for Agents 🤔
&lt;/h3&gt;

&lt;p&gt;The core issue is that &lt;em&gt;agent behavior is not deterministic&lt;/em&gt;. You can’t rely on a handful of test cases and assume production will behave the same way. Even if your &lt;em&gt;offline evaluations&lt;/em&gt; look great, real users can bring unpredictable inputs, messy context, and ambiguous intent.&lt;/p&gt;

&lt;p&gt;This means a direct rollout is risky. If something goes wrong, it’s not always obvious. The new version can give:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Slightly worse answers&lt;/li&gt;
&lt;li&gt;Slightly more hallucinations&lt;/li&gt;
&lt;li&gt;Slightly longer responses that annoy users&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By the time you notice, the damage is already done 😬&lt;/p&gt;




&lt;h3&gt;
  
  
  The Idea Behind Shadow Deployments 🧠
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckjjow3lrhh14r6x8pv5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fckjjow3lrhh14r6x8pv5.png" alt=" " width="701" height="804"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in the diagram above, instead of replacing your current agent (V1), you run the new version (V2) alongside it.&lt;/p&gt;

&lt;p&gt;The user sends a request, and your system (the orchestrator, in this case) does something interesting behind the scenes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;stable agent&lt;/code&gt; handles the request as usual and returns the response to the user&lt;/li&gt;
&lt;li&gt;At the same time, the &lt;code&gt;new agent (V2)&lt;/code&gt; receives the exact same input but its output is never shown to the user. It just runs quietly in the background 🏃🏻‍♂️&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is what I call a &lt;strong&gt;shadow path&lt;/strong&gt; 👻&lt;/p&gt;

&lt;p&gt;You are effectively replaying real production traffic through your new agent without exposing any risk. The &lt;em&gt;user experience&lt;/em&gt; remains unchanged, but you now have a way to observe how the &lt;code&gt;new version&lt;/code&gt; behaves under real conditions.&lt;/p&gt;
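&lt;p&gt;A minimal sketch of that shadow path, assuming an async orchestrator (the agent names, canned responses, and in-memory log below are illustrative placeholders, not a real framework):&lt;/p&gt;

```python
import asyncio

SHADOW_LOG = []  # stand-in for real structured logging / tracing

async def call_agent(name, prompt):
    # Placeholder for a real model call; names are illustrative.
    await asyncio.sleep(0)
    return f"{name} answer to: {prompt}"

async def shadow_path(prompt):
    # Errors in the shadow must never reach the user, so swallow them here.
    try:
        SHADOW_LOG.append((prompt, await call_agent("agent-v2", prompt)))
    except Exception:
        pass

async def handle_request(prompt):
    # Fire-and-forget: the canary (V2) sees the exact same input,
    # but only the stable (V1) answer is returned to the user.
    asyncio.create_task(shadow_path(prompt))
    return await call_agent("agent-v1", prompt)
```

&lt;p&gt;The key property is that &lt;code&gt;handle_request&lt;/code&gt; never awaits the shadow task, so shadow latency and shadow failures cannot leak into the user-facing response.&lt;/p&gt;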




&lt;h3&gt;
  
  
  What Actually Happens Under the Hood? ⚙️
&lt;/h3&gt;

&lt;p&gt;At the center of this setup is an orchestrator. It takes incoming requests and sends them down two paths.&lt;/p&gt;

&lt;p&gt;The first path is the &lt;em&gt;live path&lt;/em&gt;, which goes to your &lt;code&gt;stable agent&lt;/code&gt;. This is the version you trust. It produces the response that the user sees.&lt;/p&gt;

&lt;p&gt;The second path is the &lt;em&gt;shadow path&lt;/em&gt;. This goes to your &lt;code&gt;canary agent&lt;/code&gt;, which is the version you’re testing. It receives the same input, often with the &lt;strong&gt;same context and knowledge sources&lt;/strong&gt;, but its output is held back.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;It’s important to note that, to make this comparison meaningful, both agents typically rely on the &lt;strong&gt;same knowledge base.&lt;/strong&gt; If one agent had access to different data, you wouldn’t know whether the difference in output came from better reasoning or just better information. Keeping the data layer consistent ensures you are comparing apples to apples 🍎&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Comparing Outputs Is Where the Magic Happens ⚖️
&lt;/h3&gt;

&lt;p&gt;Now comes the tricky part. How do you decide which output is better?&lt;/p&gt;

&lt;p&gt;You could try to define strict rules, but language is messy. Quality is subjective. What looks better to one evaluator might not look better to another.&lt;/p&gt;

&lt;p&gt;This is where the idea of using an &lt;strong&gt;LLM-as-a-judge&lt;/strong&gt; comes in. A &lt;em&gt;reasoning model&lt;/em&gt; can evaluate both responses and decide which one is more accurate or more aligned with the user’s intent.&lt;/p&gt;

&lt;p&gt;Over time, you start collecting signals:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Maybe the new agent wins 65% of the time&lt;/li&gt;
&lt;li&gt;Maybe it’s more accurate but slightly slower&lt;/li&gt;
&lt;li&gt;Maybe it handles complex queries better but struggles with short factual ones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of this gets logged and analyzed 📊&lt;/p&gt;
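&lt;p&gt;The aggregation side can be sketched with a toy deterministic judge standing in for the real reasoning model; the prefer-the-concise-answer heuristic below is purely illustrative:&lt;/p&gt;

```python
def judge(stable_answer, candidate_answer):
    """Toy deterministic stand-in for an LLM-as-a-judge call:
    prefers the more concise answer, calls equal lengths a tie."""
    if len(stable_answer) == len(candidate_answer):
        return "tie"
    shorter = min((stable_answer, candidate_answer), key=len)
    return "candidate" if shorter == candidate_answer else "stable"

def summarize(pairs):
    """Aggregate judge verdicts over (stable, candidate) answer pairs
    into the win-rate signal described above."""
    verdicts = [judge(s, c) for s, c in pairs]
    decided = len(verdicts) - verdicts.count("tie")
    wins = verdicts.count("candidate")
    return {"candidate_win_rate": wins / max(decided, 1),
            "ties": verdicts.count("tie")}
```

&lt;p&gt;In a real system the &lt;code&gt;judge&lt;/code&gt; call would be a model invocation with a rubric, and the summary would be broken down by query type, latency bucket, and cost.&lt;/p&gt;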




&lt;h3&gt;
  
  
  Turning Observations Into Decisions 🔁
&lt;/h3&gt;

&lt;p&gt;After running this setup for a while, patterns begin to emerge. You can see latency differences, cost implications and even qualitative improvements in reasoning.&lt;/p&gt;

&lt;p&gt;At this point, promoting the canary is no longer a risky move; it becomes a controlled decision.&lt;/p&gt;

&lt;p&gt;If the new agent consistently performs better and meets your criteria, you promote it to production. &lt;strong&gt;The canary becomes the new stable version, and the cycle continues&lt;/strong&gt;.&lt;/p&gt;




&lt;h3&gt;
  
  
  Things That Still Need Careful Thought ⚠️
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Shadow deployments are powerful but they are not free&lt;/strong&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Running two agents in parallel increases cost, so many teams sample traffic instead of shadowing everything&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Latency also needs to be isolated so the shadow path never slows down the user response&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Evaluation quality is another challenge. LLM-as-a-judge works well, but it can be inconsistent. Many teams improve this by combining automated evaluation with occasional human review&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Observability becomes critical. You need to track inputs, outputs, context, and decisions in a structured way. Without that, you are just collecting noise&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
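&lt;p&gt;Sampling is straightforward to make deterministic: hash the request id so the same request always gets the same decision and roughly the chosen percentage of traffic is shadowed, with no shared state between orchestrator instances. A minimal sketch (the 10% default is an illustrative choice):&lt;/p&gt;

```python
import hashlib

def should_shadow(request_id, sample_percent=10):
    """Deterministic traffic sampling for the shadow path.

    Hashing the request id means the same request always gets the same
    decision, and roughly sample_percent of traffic is shadowed without
    any shared counters. The 10% default is an illustrative choice.
    """
    digest = hashlib.sha256(request_id.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % 100
    return bucket in range(sample_percent)
```

&lt;p&gt;Because the decision is a pure function of the request id, it also makes shadow runs reproducible when you replay logged traffic later.&lt;/p&gt;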




&lt;h3&gt;
  
  
  The Bigger Picture 🧩
&lt;/h3&gt;

&lt;p&gt;If you are serious about building production-grade AI agents, this is &lt;strong&gt;not just a nice-to-have pattern&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It’s one of the foundational pieces that makes everything else possible 🚀&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agentskills</category>
      <category>aws</category>
      <category>agents</category>
    </item>
    <item>
      <title>Why Technical Startups Fail: Building in a Vacuum</title>
      <dc:creator>Matthew Gladding</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:35:19 +0000</pubDate>
      <link>https://dev.to/glad_labs/why-technical-startups-fail-building-in-a-vacuum-l2g</link>
      <guid>https://dev.to/glad_labs/why-technical-startups-fail-building-in-a-vacuum-l2g</guid>
      <description>&lt;p&gt;There is a specific, lonely moment that every technical founder eventually faces. It is the moment the code is clean, the architecture is scalable, and the beta version is ready to launch. You look at your screen, proud of the elegant solution you've built, and you expect the world to beat a path to your door. Instead, the silence is deafening.&lt;/p&gt;

&lt;p&gt;You send out a few emails to your network. You post a LinkedIn update about the new feature. You wait. And you wait.&lt;/p&gt;

&lt;p&gt;This scenario plays out in thousands of garage offices and co-working spaces every single day. The disconnect between a brilliant technical solution and a lack of customers is rarely a failure of the product itself. More often than not, it is a failure of communication. In the world of modern business, technical prowess is no longer enough. If you cannot articulate the value of your work to a human being, your product is effectively invisible.&lt;/p&gt;

&lt;p&gt;This is the harsh reality of the content marketing landscape for technical founders. It is a battlefield where the tools of the trade--algorithms, syntax, and architecture--are pitted against the softer skills of persuasion, empathy, and storytelling. Most technical founders fail not because they lack intelligence, but because they approach content marketing with the wrong mindset. They treat it as an afterthought, a chore, or a translation exercise rather than a strategic asset.&lt;/p&gt;

&lt;p&gt;Understanding why this happens is the first step toward fixing it. It requires looking past the lines of code and examining the psychological barriers that prevent technical leaders from connecting with their audience. It is a journey from being a builder of things to becoming a builder of a brand, and the transition is where the real work begins.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Engineer's Dilemma: Why You're Talking to Yourself
&lt;/h3&gt;

&lt;p&gt;The root of the problem often lies deep in the founder's background. Technical founders are trained to solve problems. They are trained to optimize, to debug, and to find the most efficient path from Point A to Point B. This mode of thinking is analytical, linear, and highly precise. However, content marketing is rarely linear; it is contextual, emotional, and conversational.&lt;/p&gt;

&lt;p&gt;When a technical founder sits down to write a blog post or a social media update, they often fall into the trap of talking to themselves. They write for their peers, for other engineers, or for the imaginary technical review board. They assume that if the reader understands the complexity of the solution, they will automatically understand the value.&lt;/p&gt;

&lt;p&gt;This is a dangerous assumption. The average business user does not care about the specific API endpoint or the algorithmic complexity of your search function. They care about how their life is easier, faster, or more profitable because of what you built. The language of value is not binary; it is human.&lt;/p&gt;

&lt;p&gt;Consider the difference between a technical manual and a marketing page. A manual tells you &lt;em&gt;how&lt;/em&gt; to do something, assuming you already know &lt;em&gt;why&lt;/em&gt; you want to do it. Marketing tells you &lt;em&gt;why&lt;/em&gt; you should do it, and then shows you &lt;em&gt;how&lt;/em&gt;. Technical founders often struggle to make this switch. They view content as a manual for their product, a way to explain how it works, rather than a pitch for its benefits.&lt;/p&gt;

&lt;p&gt;This creates a profound disconnect. You are speaking a language of logic and precision, while your potential customers are looking for a solution to a problem they are feeling emotionally. Until you can translate that complex logic into simple, relatable benefits, you will continue to build in a vacuum. You are the only one who understands the code, and that is a lonely place to be when you are trying to build a business.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Perfectionism Trap: When Good Enough Becomes the Enemy
&lt;/h3&gt;

&lt;p&gt;If the first hurdle is a lack of audience alignment, the second is often paralysis. Technical founders are often perfectionists by nature. They strive for 100% accuracy. They want their documentation to be flawless. They want their code to be bug-free. They apply this same standard to their content.&lt;/p&gt;

&lt;p&gt;However, content marketing is not a research paper. It is a conversation. And conversations, by their very nature, are messy and imperfect. They evolve. They are corrected. They are refined in real-time.&lt;/p&gt;

&lt;p&gt;The "Perfectionism Trap" is the belief that you cannot publish anything until it is absolutely perfect. This mindset is the enemy of growth. In the fast-paced world of digital media, speed is often more important than perfection. By waiting for the "perfect" post, you are often waiting until the market has moved on.&lt;/p&gt;

&lt;p&gt;Furthermore, technical perfectionism often leads to jargon. There is a comfort in using technical terms. It establishes authority. It shows that you are an expert. But it also creates a wall. If a reader has to Google a term just to understand your sentence, you have lost them. The goal of content marketing is to lower the barrier to entry, not to raise it.&lt;/p&gt;

&lt;p&gt;Many organizations have found that their best-performing content is often the simplest. It is the post that explains a complex concept using an analogy that anyone can understand. It is the video that skips the technical deep dive and focuses entirely on the customer's pain point.&lt;/p&gt;

&lt;p&gt;To overcome this, technical founders must learn to let go of the need for total control. They must accept that their first draft will be flawed. They must learn to write for the reader, not for their own ego. The goal is to start the conversation, not to write the final word on the subject. Once you publish, you can iterate, improve, and refine based on real feedback. But you cannot iterate on a file that never leaves your hard drive.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Strategy Gap: Why "Just Posting" Doesn't Work
&lt;/h3&gt;

&lt;p&gt;Closely related to perfectionism is the lack of a coherent strategy. Many technical founders view content marketing as a sporadic activity--a few posts here, a tweet there, and a newsletter update whenever inspiration strikes. They treat it as a hobby rather than a business function.&lt;/p&gt;

&lt;p&gt;This is the "Strategy Gap." Without a plan, content marketing becomes a random walk through the internet, hoping to stumble upon a customer. It is inefficient and unsustainable.&lt;/p&gt;

&lt;p&gt;A true content strategy involves understanding your audience deeply. Who are they? What are their pain points? What questions are they asking? Where do they hang out online? Once you have this intelligence, you can create a content calendar that addresses these specific needs over time.&lt;/p&gt;

&lt;p&gt;It is not enough to simply broadcast that you have launched a new feature. That is "broadcasting," not "marketing." Real marketing involves educating, entertaining, and engaging. It involves solving a problem for the reader before they even realize they have it.&lt;/p&gt;

&lt;p&gt;For a technical founder, this might mean creating a series of "how-to" guides that solve a specific technical problem that your software addresses. It might mean producing case studies that demonstrate how other companies have used your tools to save money or increase efficiency. It means creating content that is valuable in itself, regardless of whether the reader ever buys your product.&lt;/p&gt;

&lt;p&gt;The Strategy Gap is also visible in the lack of consistency. Technical founders often burn out because they try to do it all at once. They decide to start a blog, write a weekly newsletter, post on LinkedIn three times a day, and start a podcast. The result is usually a hasty, low-quality effort across all channels.&lt;/p&gt;

&lt;p&gt;A better approach is to pick one or two channels where your audience actually hangs out and commit to them. Focus on quality and consistency over quantity. Build a library of assets that you can repurpose and update over time. This is not a sprint; it is a marathon.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Blueprint for Conversion: Moving from Code to Conversation
&lt;/h3&gt;

&lt;p&gt;So, how do you fix this? How do you move from a struggling technical founder to a content-savvy leader? The transformation begins with a mindset shift. You must stop thinking like a developer and start thinking like a publisher.&lt;/p&gt;

&lt;p&gt;The first step is to adopt the "Writer's Mindset." This means approaching your writing with empathy. Before you write a single word, ask yourself: "Who is this for?" and "What is their problem?" Write as if you are having a one-on-one conversation with a single person in a coffee shop. Use clear, simple language. Avoid jargon unless you can explain it in plain English.&lt;/p&gt;

&lt;p&gt;The second step is to treat content like a product. Just as you would test your software for bugs, you should test your content. Look at your analytics. Which posts are getting the most engagement? Which ones are driving traffic to your website? Use this data to inform your future content strategy. If a technical deep dive post isn't getting shares, maybe it's too dry. If a "behind the scenes" post is going viral, maybe that is your niche.&lt;/p&gt;

&lt;p&gt;Third, you must integrate content creation into your development cycle. Do not wait until the product is finished to start talking about it. Start writing about the problems you are solving while you are still in the design phase. This not only builds anticipation but also helps you clarify your own thinking. Writing about your vision forces you to articulate it clearly, which is essential for your own understanding.&lt;/p&gt;

&lt;p&gt;Finally, you need to stop trying to be perfect and start trying to be helpful. The most successful technical brands are those that provide genuine value to their community. They answer questions. They share knowledge. They admit when they don't know something. This builds trust. And in business, trust is the currency that buys customers.&lt;/p&gt;

&lt;h3&gt;
  
  
  Your Next Move: Stop Building, Start Talking
&lt;/h3&gt;

&lt;p&gt;The technical founder who understands this truth will have a significant advantage. They will not just build a product; they will build a community. They will not just write code; they will write copy that sells. They will realize that the best product in the world is useless if no one knows it exists.&lt;/p&gt;

&lt;p&gt;The journey from isolation to connection is challenging. It requires learning new skills and stepping out of your comfort zone. It requires admitting that you don't have all the answers and that your audience might know things you don't. But the rewards are immense. You build a brand that resonates. You create a loyal following that advocates for your product. You turn your technical expertise into a powerful marketing asset.&lt;/p&gt;

&lt;p&gt;So, the next time you sit down to write, put down the technical documentation. Pick up the pen. Or open the laptop. Write for the human being on the other side of the screen. Explain the value. Tell the story. And most importantly, listen to the response.&lt;/p&gt;

&lt;p&gt;Your customers are waiting for you to stop building in a vacuum and start talking to them. Are you ready to have the conversation?&lt;/p&gt;




&lt;h3&gt;
  
  
  External Resources for Further Reading
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;HubSpot: The Beginner's Guide to Content Marketing&lt;/strong&gt; - A comprehensive overview of what content marketing is and why it matters for businesses of all sizes.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Neil Patel: How to Write a Blog Post That Converts&lt;/strong&gt; - Practical advice on structuring your content to engage readers and drive action.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Harvard Business Review: The Role of Storytelling in Business&lt;/strong&gt; - Insights into how narrative can be used to build brand identity and connect with audiences.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Moz: The Beginner's Guide to SEO&lt;/strong&gt; - Understanding how content fits into the broader digital marketing ecosystem and search engine visibility.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>technicalmarketingwriteoftenpr</category>
    </item>
    <item>
      <title>gpt-image-2 API Developer Guide: Pricing, Thinking Mode, and Production Integration (2026)</title>
      <dc:creator>tokenmixai</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:31:08 +0000</pubDate>
      <link>https://dev.to/tokenmixai/gpt-image-2-api-developer-guide-pricing-thinking-mode-and-production-integration-2026-28p5</link>
      <guid>https://dev.to/tokenmixai/gpt-image-2-api-developer-guide-pricing-thinking-mode-and-production-integration-2026-28p5</guid>
      <description>&lt;h1&gt;
  
  
  gpt-image-2 API Developer Guide: Pricing, Thinking Mode, and Production Integration (2026)
&lt;/h1&gt;

&lt;p&gt;OpenAI announced &lt;strong&gt;gpt-image-2&lt;/strong&gt; on April 21, 2026 — but the official API doesn't open to developers until &lt;strong&gt;early May 2026&lt;/strong&gt;. That gap between "announced" and "shippable" is exactly when developers need to architect, budget, and prototype. This guide covers everything a developer needs to know &lt;em&gt;now&lt;/em&gt;: the published pricing math, the Instant/Thinking mode trade-offs, the multi-image API contract, pre-release access via fal.ai and apiyi, and a cost calculator template you can drop into a project today. Code examples in Python, all working against either the pre-release third-party endpoints or the OpenAI API once it goes live in early May. &lt;a href="https://tokenmix.ai" rel="noopener noreferrer"&gt;TokenMix.ai&lt;/a&gt; tracks gpt-image-2 alongside 50+ image models for teams comparing inference cost and routing per task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Table of Contents
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;What Developers Need to Know in One Page&lt;/li&gt;
&lt;li&gt;Pricing Breakdown: Per-Token, Per-Image, Per-Workflow&lt;/li&gt;
&lt;li&gt;Instant vs Thinking Mode: When to Use Which&lt;/li&gt;
&lt;li&gt;Pre-Release API Access (fal.ai, apiyi)&lt;/li&gt;
&lt;li&gt;Code: Single Image Generation&lt;/li&gt;
&lt;li&gt;Code: 8-Image Consistent Series&lt;/li&gt;
&lt;li&gt;Code: Image Editing / Inpainting&lt;/li&gt;
&lt;li&gt;Cost Calculator Template&lt;/li&gt;
&lt;li&gt;Migrating from gpt-image-1 / DALL-E 3&lt;/li&gt;
&lt;li&gt;Rate Limits, Errors, and Production Gotchas&lt;/li&gt;
&lt;li&gt;FAQ&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What Developers Need to Know in One Page {#tldr}
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Quick answer&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Model name&lt;/td&gt;
&lt;td&gt;&lt;code&gt;gpt-image-2&lt;/code&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Modes&lt;/td&gt;
&lt;td&gt;
&lt;code&gt;instant&lt;/code&gt; (default), &lt;code&gt;thinking&lt;/code&gt; (opt-in)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Released&lt;/td&gt;
&lt;td&gt;April 21, 2026 (ChatGPT/Codex)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;API GA&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Early May 2026&lt;/strong&gt; (OpenAI direct)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Pre-release access&lt;/td&gt;
&lt;td&gt;fal.ai, apiyi (third-party hosted)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Max resolution&lt;/td&gt;
&lt;td&gt;2000px long edge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Aspect ratios&lt;/td&gt;
&lt;td&gt;1:1, 3:2, 2:3, 16:9, 9:16, 3:1, 1:3&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-image per call&lt;/td&gt;
&lt;td&gt;&lt;strong&gt;Up to 8 with character/object continuity&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web search grounding&lt;/td&gt;
&lt;td&gt;Yes (in Thinking mode)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Per-image cost&lt;/td&gt;
&lt;td&gt;~$0.21 at 1024×1024 HD standard&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Token-level pricing&lt;/td&gt;
&lt;td&gt;$5/$10/$8/$30 per MTok (text-in / text-out / image-in / image-out)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;SDK&lt;/td&gt;
&lt;td&gt;Same &lt;code&gt;openai&lt;/code&gt; Python/Node client, new endpoint pattern&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Image editing&lt;/td&gt;
&lt;td&gt;Supported (same endpoint family as gpt-image-1)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Content policy&lt;/td&gt;
&lt;td&gt;Same as ChatGPT — no NSFW, no real persons, no copyrighted characters&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;If you're an existing OpenAI image API user, &lt;strong&gt;the migration is mechanical&lt;/strong&gt;: change &lt;code&gt;model="gpt-image-1"&lt;/code&gt; to &lt;code&gt;model="gpt-image-2"&lt;/code&gt;, optionally add &lt;code&gt;quality="thinking"&lt;/code&gt; for complex prompts, optionally request &lt;code&gt;n=8&lt;/code&gt; for consistent series.&lt;/p&gt;
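&lt;p&gt;That migration can be sketched as a small request transform, assuming the request shape stays close to the existing gpt-image-1 images API. The &lt;code&gt;complex_prompt&lt;/code&gt; flag is a hypothetical convenience used here for illustration, and the final gpt-image-2 parameter names may differ once the API ships:&lt;/p&gt;

```python
def migrate_image_request(kwargs):
    """Sketch of the mechanical gpt-image-1 to gpt-image-2 migration.

    'complex_prompt' is a hypothetical flag used only for illustration;
    the final gpt-image-2 parameter names may differ once the API ships.
    """
    migrated = dict(kwargs)
    migrated["model"] = "gpt-image-2"
    if migrated.pop("complex_prompt", False):
        # Opt into Thinking mode for structured or multi-element prompts.
        migrated["quality"] = "thinking"
    return migrated
```

&lt;p&gt;The resulting dict would then be passed to the images endpoint (e.g. &lt;code&gt;client.images.generate(**migrated)&lt;/code&gt;) once the official API is live.&lt;/p&gt;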

&lt;h2&gt;
  
  
  Pricing Breakdown: Per-Token, Per-Image, Per-Workflow {#pricing}
&lt;/h2&gt;

&lt;p&gt;OpenAI pricing for gpt-image-2 (per &lt;a href="https://openai.com/api/pricing/" rel="noopener noreferrer"&gt;official pricing page&lt;/a&gt;):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Direction&lt;/th&gt;
&lt;th&gt;$/M tokens&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Input text&lt;/td&gt;
&lt;td&gt;$5&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output text&lt;/td&gt;
&lt;td&gt;$10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Input image&lt;/td&gt;
&lt;td&gt;$8&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Output image&lt;/td&gt;
&lt;td&gt;$30&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Why per-token instead of per-image?
&lt;/h3&gt;

&lt;p&gt;Because gpt-image-2 charges for the &lt;strong&gt;planning work&lt;/strong&gt; (prompt comprehension, reasoning steps, web-search results) plus the actual pixel output. A simple "cat on a chair" costs less than "magazine cover with 5 cover lines and a hero photo." Per-token billing captures that.&lt;/p&gt;
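&lt;p&gt;That math is easy to sanity-check against the published per-MTok rates. The 7,000-output-token figure for a 1024x1024 HD image below is an assumption chosen to reproduce the roughly $0.21 per-image number, not an official token count:&lt;/p&gt;

```python
# USD per million tokens, from the pricing table above.
RATES_PER_MTOK = {"text_in": 5, "text_out": 10, "image_in": 8, "image_out": 30}

def call_cost_usd(text_in=0, text_out=0, image_in=0, image_out=0):
    """Cost of a single gpt-image-2 call, given its token counts."""
    counts = {"text_in": text_in, "text_out": text_out,
              "image_in": image_in, "image_out": image_out}
    return sum(n * RATES_PER_MTOK[k] / 1_000_000 for k, n in counts.items())

# Assumption: ~7,000 output tokens for a 1024x1024 HD image, chosen to
# reproduce the roughly $0.21 per-image figure; not an official count.
hero_image_cost = call_cost_usd(text_in=50, image_out=7000)  # about 0.21
```

&lt;p&gt;Note how the prompt side is nearly free at these rates: 50 input text tokens add a fraction of a cent, so the output-image tokens dominate the bill.&lt;/p&gt;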

&lt;h3&gt;
  
  
  Per-image cost cheat sheet
&lt;/h3&gt;

&lt;p&gt;Approximate cost per image, assuming a 50-token text prompt:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resolution&lt;/th&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;th&gt;Approximate cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;1024×1024&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;$0.10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024×1024&lt;/td&gt;
&lt;td&gt;Thinking&lt;/td&gt;
&lt;td&gt;$0.21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024×1024 HD&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;$0.21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1024×1024 HD&lt;/td&gt;
&lt;td&gt;Thinking&lt;/td&gt;
&lt;td&gt;$0.40&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1792×1024&lt;/td&gt;
&lt;td&gt;Instant&lt;/td&gt;
&lt;td&gt;$0.18&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;1792×1024&lt;/td&gt;
&lt;td&gt;Thinking&lt;/td&gt;
&lt;td&gt;$0.35&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;2000×1125 (max)&lt;/td&gt;
&lt;td&gt;Thinking&lt;/td&gt;
&lt;td&gt;~$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Workflow cost examples
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Workflow&lt;/th&gt;
&lt;th&gt;Calls&lt;/th&gt;
&lt;th&gt;Estimated cost&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Single hero image, 1024×1024 HD&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;$0.21&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;8-image storyboard, 1024×1024&lt;/td&gt;
&lt;td&gt;1 (n=8)&lt;/td&gt;
&lt;td&gt;~$1.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Magazine cover, Thinking mode, 2000×1125&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;~$0.50&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Daily 100 social posts, 1024×1024 Instant&lt;/td&gt;
&lt;td&gt;100&lt;/td&gt;
&lt;td&gt;~$10/day&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Marketing campaign: 50 multilingual variants, Thinking, HD&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;~$20&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;For teams generating thousands of images per day, &lt;a href="https://tokenmix.ai" rel="noopener noreferrer"&gt;TokenMix.ai&lt;/a&gt; tracks live pricing across gpt-image-2, Imagen 4 Ultra, Seedream 5, FLUX, and others — and lets you route per task (text-heavy → gpt-image-2, stylized → Midjourney, budget → FLUX).&lt;/p&gt;

&lt;h2&gt;
  
  
  Instant vs Thinking Mode: When to Use Which {#modes}
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Aspect&lt;/th&gt;
&lt;th&gt;Instant&lt;/th&gt;
&lt;th&gt;Thinking&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Latency&lt;/td&gt;
&lt;td&gt;3-5s&lt;/td&gt;
&lt;td&gt;10-30s&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cost multiplier&lt;/td&gt;
&lt;td&gt;1×&lt;/td&gt;
&lt;td&gt;2-3×&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Best for&lt;/td&gt;
&lt;td&gt;Single concept, short prompts, casual content&lt;/td&gt;
&lt;td&gt;Multi-element prompts, infographics, structured layouts, multilingual text, web-grounded content&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;When it self-verifies&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes — checks output and re-renders if needed&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Web search&lt;/td&gt;
&lt;td&gt;No&lt;/td&gt;
&lt;td&gt;Yes&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Multi-image consistency (n=8)&lt;/td&gt;
&lt;td&gt;Available, but quality lower&lt;/td&gt;
&lt;td&gt;
&lt;strong&gt;Recommended&lt;/strong&gt; — planning step ensures continuity&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h3&gt;
  
  
  Decision tree
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Is the prompt &amp;gt; 30 words OR contains structured info (text, layout, multilingual)?
├── Yes → Thinking mode
└── No
    └── Is web-grounded data needed (current weather, real maps, etc.)?
        ├── Yes → Thinking mode
        └── No
            └── Is multi-image continuity required (n &amp;gt; 1)?
                ├── Yes → Thinking mode
                └── No → Instant mode
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In practice: &lt;strong&gt;default Instant, opt into Thinking&lt;/strong&gt; when the prompt has structure or multi-image requirements.&lt;/p&gt;
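&lt;p&gt;The decision tree above can be written as a small helper. The keyword scan for "structured" prompts is a crude illustrative stand-in for real prompt analysis:&lt;/p&gt;

```python
def choose_mode(prompt, needs_web_data=False, n=1):
    """The decision tree above as a helper function.

    The keyword scan for 'structured' prompts is a crude illustrative
    stand-in for real prompt analysis.
    """
    structured = any(w in prompt.lower() for w in ("text", "layout", "multilingual"))
    short_prompt = len(prompt.split()) in range(31)  # 30 words or fewer
    if short_prompt and not structured and not needs_web_data and n == 1:
        return "instant"
    return "thinking"
```

&lt;p&gt;Wiring this into the request path gives you the "default Instant, opt into Thinking" behavior automatically instead of relying on each caller to remember the rules.&lt;/p&gt;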

&lt;h2&gt;
  
  
  Pre-Release API Access (fal.ai, apiyi) {#pre-release}
&lt;/h2&gt;

&lt;p&gt;OpenAI's official API GA is early May 2026. For teams that need to prototype now, two third-party providers expose pre-release gpt-image-2 endpoints:&lt;/p&gt;

&lt;h3&gt;
  
  
  fal.ai
&lt;/h3&gt;

&lt;p&gt;An OpenAI partner that hosts gpt-image-2 at &lt;code&gt;fal-ai/openai/gpt-image-2&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;fal_client&lt;/span&gt;

&lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;fal_client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;subscribe&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;fal-ai/openai/gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;arguments&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prompt&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Magazine cover, hero photo of a coffee shop, headline &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Brew Renaissance&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; in bold serif&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;image_size&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;portrait_16_9&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thinking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;images&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;][&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  apiyi.com
&lt;/h3&gt;

&lt;p&gt;An API aggregator with gpt-image-2 access at fixed per-call pricing (roughly $0.03 per standard call; varies by size and quality):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;your-apiyi-key&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;base_url&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;https://api.apiyi.com/v1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1024x1024&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;quality&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hd&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Caveat&lt;/strong&gt;: pre-release endpoints have variable rate limits, occasional outages, and may not match the final OpenAI API contract exactly. Use for prototyping, not production.&lt;/p&gt;
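&lt;p&gt;Given those stability caveats, it's worth wrapping pre-release calls in a retry with exponential backoff. A generic sketch (the helper name is mine; &lt;code&gt;call&lt;/code&gt; is any zero-argument callable wrapping your provider client):&lt;/p&gt;

```python
import random
import time


def with_retries(call, max_attempts=4, base_delay=1.0, retryable=(Exception,)):
    """Retry a flaky pre-release API call with exponential backoff plus jitter.

    `call` is a zero-argument callable, e.g.
    lambda: client.images.generate(model="gpt-image-2", prompt="...").
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except retryable:
            if attempt == max_attempts:
                raise  # out of attempts: surface the original error
            # Back off 1s, 2s, 4s, ... with jitter to avoid synchronized retries
            time.sleep(base_delay * 2 ** (attempt - 1) + random.random() * 0.1)
```

&lt;p&gt;In production you would narrow &lt;code&gt;retryable&lt;/code&gt; to rate-limit and server errors rather than catching everything.&lt;/p&gt;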

&lt;h2&gt;
  
  
  Code: Single Image Generation {#code-single}
&lt;/h2&gt;

&lt;p&gt;Once OpenAI's API opens (early May 2026), the canonical pattern looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;

&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;api_key&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;sk-...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Restaurant menu cover, &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Saigon Street Food&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;, dark wood texture background, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
           &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;bilingual Vietnamese-English, photographic style&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1024x1536&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;      &lt;span class="c1"&gt;# portrait
&lt;/span&gt;    &lt;span class="n"&gt;quality&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;hd&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;quality_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;instant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="c1"&gt;# or "thinking"
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;image_url&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;
&lt;span class="c1"&gt;# or response.data[0].b64_json if using response_format="b64_json"
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Saving the image
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;

&lt;span class="n"&gt;img_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;image_url&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;
&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;menu_cover.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Inline base64 (avoid the URL fetch step)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;

&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;...&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;response_format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;b64_json&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;img_bytes&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;base64&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;b64decode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;b64_json&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_bytes&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Code: 8-Image Consistent Series {#code-multi}
&lt;/h2&gt;

&lt;p&gt;The flagship feature: a single API call, 8 outputs, character and scene continuity preserved:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;8-panel storyboard for a 30-second ad: a young engineer arrives at a coffee shop, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;opens a laptop, codes intensely, has an aha moment, ships a feature, celebrates, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;shares with team, day ends. Consistent character (woman, mid-20s, glasses, purple hoodie), &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;consistent setting (warm-lit coffee shop). Cinematic style.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1792x1024&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;quality_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thinking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# required for true consistency
&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;img&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;enumerate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;img_data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;requests&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;
    &lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;storyboard_&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;i&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;wb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;f&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;img_data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Use cases unlocked
&lt;/h3&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Use case&lt;/th&gt;
&lt;th&gt;n&lt;/th&gt;
&lt;th&gt;Mode&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Comic strip&lt;/td&gt;
&lt;td&gt;4-8&lt;/td&gt;
&lt;td&gt;Thinking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Product variations (colors/angles)&lt;/td&gt;
&lt;td&gt;4-8&lt;/td&gt;
&lt;td&gt;Thinking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Sequential tutorial steps&lt;/td&gt;
&lt;td&gt;4-8&lt;/td&gt;
&lt;td&gt;Thinking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;A/B creative variants&lt;/td&gt;
&lt;td&gt;2-4&lt;/td&gt;
&lt;td&gt;Instant or Thinking&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Manga panel sequence&lt;/td&gt;
&lt;td&gt;6-8&lt;/td&gt;
&lt;td&gt;Thinking&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
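&lt;p&gt;The table above translates directly into request presets. The dict and helper below are illustrative (my own names, with one default &lt;code&gt;n&lt;/code&gt; picked from each row's range), not part of any official SDK:&lt;/p&gt;

```python
# Preset kwargs per use case, mirroring the table above.
SERIES_PRESETS = {
    "comic_strip":        {"n": 8, "quality_mode": "thinking"},
    "product_variations": {"n": 6, "quality_mode": "thinking"},
    "tutorial_steps":     {"n": 6, "quality_mode": "thinking"},
    "ab_variants":        {"n": 2, "quality_mode": "instant"},
    "manga_sequence":     {"n": 8, "quality_mode": "thinking"},
}


def series_kwargs(use_case: str, **overrides) -> dict:
    """Look up a preset and allow per-call overrides, e.g. n=4."""
    kwargs = dict(SERIES_PRESETS[use_case])  # copy so presets stay immutable
    kwargs.update(overrides)
    return kwargs
```

&lt;p&gt;Then a call becomes &lt;code&gt;client.images.generate(model="gpt-image-2", prompt=..., **series_kwargs("comic_strip"))&lt;/code&gt;.&lt;/p&gt;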

&lt;h2&gt;
  
  
  Code: Image Editing / Inpainting {#code-edit}
&lt;/h2&gt;

&lt;p&gt;The edit endpoint follows the same pattern as gpt-image-1, just with the new model name:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="k"&gt;with&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;original.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;image_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nf"&gt;open&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mask.png&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;rb&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;mask_file&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;edit&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;image&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;image_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;mask&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;mask_file&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Replace the background with a sunset beach, keep the subject&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1024x1024&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;url&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The &lt;code&gt;mask.png&lt;/code&gt; must have the same dimensions as &lt;code&gt;original.png&lt;/code&gt;, with transparent areas marking the regions to edit.&lt;/p&gt;
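&lt;p&gt;Mismatched dimensions are the most common edit-endpoint error, so it's cheap to validate locally before uploading. A stdlib-only sketch that reads width and height straight from the PNG header (it assumes a well-formed PNG where IHDR is the first chunk, which the PNG spec requires; the helper names are mine):&lt;/p&gt;

```python
import struct


def png_size(data: bytes) -> tuple:
    """Return (width, height) from PNG bytes.

    PNG layout: 8-byte signature, 4-byte chunk length, 4-byte "IHDR" type,
    then big-endian width and height at bytes 16-24.
    """
    if data[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file")
    return struct.unpack(">II", data[16:24])


def assert_mask_matches(image_bytes: bytes, mask_bytes: bytes) -> None:
    """Raise before wasting an API call on a dimension mismatch."""
    if png_size(image_bytes) != png_size(mask_bytes):
        raise ValueError("mask.png must match original.png dimensions")
```

&lt;p&gt;Call &lt;code&gt;assert_mask_matches(open("original.png", "rb").read(), open("mask.png", "rb").read())&lt;/code&gt; before invoking &lt;code&gt;client.images.edit&lt;/code&gt;.&lt;/p&gt;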

&lt;h2&gt;
  
  
  Cost Calculator Template {#cost-calc}
&lt;/h2&gt;

&lt;p&gt;Drop-in cost estimator for budgeting:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;PRICING&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_text_per_mtok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;5.00&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_text_per_mtok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;10.00&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_image_per_mtok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;8.00&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_image_per_mtok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mf"&gt;30.00&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;estimate_cost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;prompt_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;output_image_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;n_images&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;thinking_mode&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;bool&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;input_image_tokens&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;int&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Rough cost estimate in USD.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="c1"&gt;# Thinking mode adds reasoning tokens (rough estimate: 2-3x input)
&lt;/span&gt;    &lt;span class="n"&gt;reasoning_multiplier&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;2.5&lt;/span&gt; &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;thinking_mode&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="mf"&gt;1.0&lt;/span&gt;

    &lt;span class="n"&gt;input_text_cost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;prompt_tokens&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;reasoning_multiplier&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;PRICING&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_text_per_mtok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1_000_000&lt;/span&gt;
    &lt;span class="n"&gt;input_image_cost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;input_image_tokens&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;PRICING&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_image_per_mtok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1_000_000&lt;/span&gt;
    &lt;span class="n"&gt;output_image_cost&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="n"&gt;output_image_tokens&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;n_images&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt; &lt;span class="n"&gt;PRICING&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_image_per_mtok&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt; &lt;span class="mi"&gt;1_000_000&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_text&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_text_cost&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;input_image&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_image_cost&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;output_image&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;output_image_cost&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;total&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nf"&gt;round&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;input_text_cost&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;input_image_cost&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;output_image_cost&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;


&lt;span class="c1"&gt;# Example: HD 1024x1024, Thinking mode, single image
# Rough token mapping: 1024x1024 HD ≈ 6800 output tokens
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;estimate_cost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;prompt_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;80&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;output_image_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;6800&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;n_images&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;thinking_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="c1"&gt;# {'input_text': 0.001, 'input_image': 0.0, 'output_image': 0.204, 'total': 0.205}
&lt;/span&gt;
&lt;span class="c1"&gt;# Example: 8-image storyboard, Thinking
&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nf"&gt;estimate_cost&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;prompt_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;200&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;output_image_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;4500&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;  &lt;span class="c1"&gt;# standard 1024x1024
&lt;/span&gt;    &lt;span class="n"&gt;n_images&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;thinking_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="c1"&gt;# {'input_text': 0.0025, 'input_image': 0.0, 'output_image': 1.08, 'total': 1.0825}
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For per-call billing visibility across providers (gpt-image-2, Imagen, FLUX, Seedream), &lt;a href="https://tokenmix.ai" rel="noopener noreferrer"&gt;TokenMix.ai&lt;/a&gt; exposes a unified usage dashboard.&lt;/p&gt;

&lt;h2&gt;
  
  
  Migrating from gpt-image-1 / DALL-E 3 {#migration}
&lt;/h2&gt;

&lt;h3&gt;
  
  
  From gpt-image-1
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Old
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-1&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...)&lt;/span&gt;

&lt;span class="c1"&gt;# New (mechanical change)
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...)&lt;/span&gt;

&lt;span class="c1"&gt;# Optional: opt into Thinking mode for complex prompts
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...,&lt;/span&gt;
    &lt;span class="n"&gt;quality_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thinking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Optional: request multi-image
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...,&lt;/span&gt;
    &lt;span class="n"&gt;n&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;quality_mode&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thinking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  From DALL-E 3
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Old
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;dall-e-3&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...,&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1024x1024&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# New
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-image-2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;...,&lt;/span&gt; &lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;1024x1024&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The response shape (&lt;code&gt;response.data[0].url&lt;/code&gt; / &lt;code&gt;b64_json&lt;/code&gt;) is unchanged. Existing code that handles the response will work without modification.&lt;/p&gt;

&lt;h3&gt;
  
  
  Things to retest after migration
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Prompt sensitivity&lt;/strong&gt; — gpt-image-2 follows prompts more literally than DALL-E 3. Prompts that worked via "vibes" may need to be more specific&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Negative prompts&lt;/strong&gt; — neither model exposes formal negative prompts, but gpt-image-2's reasoning can interpret natural-language exclusions ("no people in the scene") more reliably&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Style anchors&lt;/strong&gt; — gpt-image-2 leans more "photorealistic / commercial" by default; explicitly request style ("watercolor", "anime", "low-poly 3D") if needed&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Rate Limits, Errors, and Production Gotchas {#production}
&lt;/h2&gt;

&lt;p&gt;Based on the published OpenAI rate limit structure (subject to change at GA):&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Tier&lt;/th&gt;
&lt;th&gt;Images per minute&lt;/th&gt;
&lt;th&gt;Tokens per minute&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Tier 1&lt;/td&gt;
&lt;td&gt;5&lt;/td&gt;
&lt;td&gt;100K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tier 2&lt;/td&gt;
&lt;td&gt;50&lt;/td&gt;
&lt;td&gt;500K&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Tier 3+&lt;/td&gt;
&lt;td&gt;200+&lt;/td&gt;
&lt;td&gt;2M+&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;
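&lt;p&gt;To stay under these caps, a small client-side throttle is more reliable than reactive retries alone. A minimal sketch — the only assumptions are a 60-second rolling window and a per-minute capacity matching your tier (e.g. 5 for Tier 1):&lt;/p&gt;

```python
import time
from collections import deque


class ImageRateLimiter:
    """Client-side throttle for a per-minute image cap.

    register() is pure bookkeeping and returns the delay required;
    acquire() actually sleeps, for use in real request loops.
    """

    WINDOW = 60.0  # seconds

    def __init__(self, images_per_minute):
        self.capacity = images_per_minute
        self.sent = deque()  # send times inside the rolling window

    def register(self, now):
        # Evict send times that have left the 60-second window
        while self.sent and now - self.sent[0] >= self.WINDOW:
            self.sent.popleft()
        delay = 0.0
        if len(self.sent) >= self.capacity:
            # Must wait until the oldest request ages out of the window
            delay = self.WINDOW - (now - self.sent[0])
            self.sent.popleft()
        self.sent.append(now + delay)
        return delay

    def acquire(self):
        delay = self.register(time.monotonic())
        if delay > 0:
            time.sleep(delay)
```

Call &lt;code&gt;acquire()&lt;/code&gt; immediately before each &lt;code&gt;images.generate&lt;/code&gt; request; the exponential-backoff retry above then only has to absorb occasional server-side 429s.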

&lt;h3&gt;
  
  
  Common errors
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;RateLimitError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;APITimeoutError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;BadRequestError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;APIError&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;

&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_with_retry&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;attempt&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="nf"&gt;range&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
        &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;images&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;**&lt;/span&gt;&lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;RateLimitError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;wait&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="n"&gt;attempt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;random&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;random&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
            &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;wait&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;APITimeoutError&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Thinking mode can timeout on very complex prompts
&lt;/span&gt;            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;quality_mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;kwargs&lt;/span&gt; &lt;span class="ow"&gt;and&lt;/span&gt; &lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;quality_mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;thinking&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;kwargs&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;quality_mode&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;instant&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;  &lt;span class="c1"&gt;# downgrade and retry
&lt;/span&gt;            &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;raise&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;BadRequestError&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="c1"&gt;# Often: prompt violates content policy
&lt;/span&gt;            &lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Bad request: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;raise&lt;/span&gt;
        &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="n"&gt;APIError&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;attempt&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="k"&gt;raise&lt;/span&gt;
            &lt;span class="n"&gt;time&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;sleep&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="o"&gt;**&lt;/span&gt; &lt;span class="n"&gt;attempt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;raise&lt;/span&gt; &lt;span class="nc"&gt;RuntimeError&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;All retries exhausted&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Production gotchas
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Timeout default is 60s&lt;/strong&gt; — Thinking mode can hit this on complex 8-image batches. Set an explicit &lt;code&gt;timeout=120&lt;/code&gt; when combining n=8 with Thinking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Image URLs expire&lt;/strong&gt; — Per OpenAI's policy, hosted URLs expire in ~2 hours. For long-term assets, download the image immediately or request the &lt;code&gt;b64_json&lt;/code&gt; variant and store it yourself&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Content policy blocks return 400, not 403&lt;/strong&gt; — Catch &lt;code&gt;BadRequestError&lt;/code&gt; specifically and parse the message for "content_policy" before retrying&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cost surprise on Thinking + n=8&lt;/strong&gt; — A single n=8 Thinking call can cost $1-2. Add a hard budget check before invoking&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Token estimation is hard&lt;/strong&gt; — OpenAI doesn't publish a tokenizer for image outputs. Use observed average tokens-per-resolution from initial calls and budget conservatively&lt;/li&gt;
&lt;/ol&gt;
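&lt;p&gt;Gotchas #2 and #4 are easy to guard against in code. A hedged sketch — the per-image cost figures are illustrative placeholders, not published prices, and the response shape follows the &lt;code&gt;b64_json&lt;/code&gt; convention described earlier:&lt;/p&gt;

```python
import base64
import pathlib

# Illustrative per-image cost estimates (USD) — assumptions, not published prices
EST_COST = {"instant": 0.03, "thinking": 0.20}


def check_budget(n, quality_mode, max_usd):
    """Raise *before* the call if estimated spend exceeds a hard cap (gotcha #4)."""
    est = n * EST_COST.get(quality_mode, EST_COST["instant"])
    if est > max_usd:
        raise RuntimeError(f"Estimated ${est:.2f} exceeds budget ${max_usd:.2f}")
    return est


def persist_images(response, out_dir):
    """Write b64_json payloads to disk so expiring hosted URLs don't bite (gotcha #2)."""
    out = pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for i, item in enumerate(response.data):
        path = out / f"image_{i}.png"
        path.write_bytes(base64.b64decode(item.b64_json))
        paths.append(path)
    return paths
```

Run &lt;code&gt;check_budget&lt;/code&gt; before the API call and &lt;code&gt;persist_images&lt;/code&gt; immediately after it returns, inside the same retry wrapper.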

&lt;h2&gt;
  
  
  FAQ {#faq}
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Q: When can I use gpt-image-2 in production?&lt;/strong&gt;&lt;br&gt;
A: OpenAI's API GA is early May 2026. For pre-GA prototyping, fal.ai and apiyi expose endpoints today, but with variable reliability. For mission-critical work, wait for GA.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How do I integrate gpt-image-2 into a multi-model image gen system?&lt;/strong&gt;&lt;br&gt;
A: Use the OpenAI-compatible image endpoint. The &lt;code&gt;model&lt;/code&gt; parameter is the only thing that changes between gpt-image-2, Imagen 4 Ultra (via Vertex AI compat), Seedream 5, etc. A unified API gateway like &lt;a href="https://tokenmix.ai" rel="noopener noreferrer"&gt;TokenMix.ai&lt;/a&gt; abstracts the provider differences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Can I fine-tune gpt-image-2?&lt;/strong&gt;&lt;br&gt;
A: Not at launch. OpenAI hasn't announced fine-tuning for the gpt-image series.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Does gpt-image-2 support function calling / tool use during generation?&lt;/strong&gt;&lt;br&gt;
A: In Thinking mode, the model can invoke web search internally. External tool use (custom functions) is not exposed in the image generation API.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: What's the maximum prompt length?&lt;/strong&gt;&lt;br&gt;
A: Officially documented at 32,000 input tokens, but in practice prompts over ~500 tokens see diminishing returns. For long context, use the structure-aware Thinking mode.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Does gpt-image-2 work for image-to-image transformations?&lt;/strong&gt;&lt;br&gt;
A: Yes, via the &lt;code&gt;images.edit&lt;/code&gt; endpoint with an input image and optional mask. Style transfer, inpainting, and variations all work. Pure image-to-image generation (no mask) is also supported.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: How do I prevent gpt-image-2 from refusing valid prompts?&lt;/strong&gt;&lt;br&gt;
A: Avoid: real-person likenesses, copyrighted characters/brands, NSFW, violence. Be specific about safety-relevant elements ("a fictional character", "abstract symbol"). If you hit unjustified refusals, file a feedback ticket via OpenAI's developer console.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q: Should I switch from Midjourney for production?&lt;/strong&gt;&lt;br&gt;
A: Depends on workload. For text-heavy, multi-image, or multilingual content — yes, gpt-image-2 wins on quality and unblocks workflows that were impossible. For pure stylized art, Midjourney V7 still has the edge. Many teams will run both.&lt;/p&gt;




&lt;h2&gt;
  
  
  Sources
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://openai.com/index/introducing-chatgpt-images-2-0/" rel="noopener noreferrer"&gt;OpenAI: Introducing ChatGPT Images 2.0&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://developers.openai.com/api/docs/models/gpt-image-2" rel="noopener noreferrer"&gt;OpenAI gpt-image-2 Model Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://openai.com/api/pricing/" rel="noopener noreferrer"&gt;OpenAI API Pricing Page&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://techcrunch.com/2026/04/21/chatgpts-new-images-2-0-model-is-surprisingly-good-at-generating-text/" rel="noopener noreferrer"&gt;TechCrunch Coverage&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://venturebeat.com/technology/openais-chatgpt-images-2-0-is-here-and-it-does-multilingual-text-full-infographics-slides-maps-even-manga-seemingly-flawlessly" rel="noopener noreferrer"&gt;VentureBeat: Multi-language + Multi-image&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://fal.ai/models/openai/gpt-image-2" rel="noopener noreferrer"&gt;fal.ai gpt-image-2 endpoint&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://help.apiyi.com/en/gpt-image-2-official-launch-beginner-complete-guide-en.html" rel="noopener noreferrer"&gt;apiyi.com gpt-image-2 access&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://apidog.com/blog/gpt-images-2/" rel="noopener noreferrer"&gt;Apidog: What's New in ChatGPT Images 2.0&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;em&gt;By TokenMix Research Lab · Updated 2026-04-23&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>chatgpt</category>
      <category>openai</category>
      <category>mcp</category>
    </item>
    <item>
      <title>The Primeagen's '99': A New Approach to AI in Code Editors</title>
      <dc:creator>Stelixx Insider</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:20:46 +0000</pubDate>
      <link>https://dev.to/stelixx-insider/the-primeagens-99-a-new-approach-to-ai-in-code-editors-2nea</link>
      <guid>https://dev.to/stelixx-insider/the-primeagens-99-a-new-approach-to-ai-in-code-editors-2nea</guid>
      <description>&lt;p&gt;The Perpetual Struggle: Why Current AI Integrations in Code Editors Often Fail&lt;/p&gt;

&lt;p&gt;Many developers have experienced the frustration of integrating AI assistants directly into their code editors. The common pitfalls include: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Clunky User Experience:&lt;/strong&gt; AI tools that feel out of place and disrupt the natural coding flow.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Lack of Contextual Understanding:&lt;/strong&gt; AI that doesn't grasp the nuances of your specific codebase, leading to irrelevant suggestions.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Distraction Over Assistance:&lt;/strong&gt; The AI becoming more of a hindrance than a help.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is precisely the problem that The Primeagen's "99" project aims to solve. By focusing on intuitive design and deep contextual awareness, "99" is poised to redefine how developers interact with AI, making it a true productivity enhancer.&lt;/p&gt;

&lt;p&gt;This open-source initiative represents a significant step forward in making AI an indispensable and seamless part of the developer's toolkit. We're excited to see how the community contributes to and evolves this project.&lt;/p&gt;

&lt;p&gt;Stelixx #StelixxInsights #IdeaToImpact #AI #BuilderCommunity #CodeEditorAI #DevCommunity #OpenSource&lt;/p&gt;

</description>
      <category>ai</category>
      <category>web3</category>
      <category>blockchain</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Best resources to learn game development with Node.js</title>
      <dc:creator>Stack Overflowed</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:20:38 +0000</pubDate>
      <link>https://dev.to/stack_overflowed/best-resources-to-learn-game-development-with-nodejs-560h</link>
      <guid>https://dev.to/stack_overflowed/best-resources-to-learn-game-development-with-nodejs-560h</guid>
      <description>&lt;p&gt;Every few months, someone asks me, &lt;em&gt;What are the best resources to learn game development with Node.js?&lt;/em&gt; The question usually comes from one of two places. Either someone has fallen in love with JavaScript and wants to build games without leaving the ecosystem, or they’ve seen that multiplayer games often use Node on the backend and want to understand how it all fits together.&lt;/p&gt;

&lt;p&gt;The tricky part is that Node.js is not a game engine. It doesn’t render sprites. It doesn’t handle physics out of the box. It doesn’t manage scenes. So if you approach the question assuming Node will teach you “game development” the way Unity or Unreal does, you’ll feel confused.&lt;/p&gt;

&lt;p&gt;But if you reframe the question and understand what Node is actually good at, the learning path becomes clearer.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Node.js is suited for certain types of games
&lt;/h2&gt;

&lt;p&gt;Node.js shines in one specific domain of game development: &lt;strong&gt;real-time, networked systems&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Think:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Multiplayer backends
&lt;/li&gt;
&lt;li&gt;Matchmaking services
&lt;/li&gt;
&lt;li&gt;Chat systems
&lt;/li&gt;
&lt;li&gt;Game state synchronization
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The event-driven, non-blocking architecture of Node makes it well-suited for handling many concurrent connections.&lt;/p&gt;

&lt;p&gt;Node is ideal for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Real-time multiplayer servers using WebSockets
&lt;/li&gt;
&lt;li&gt;Turn-based backend logic
&lt;/li&gt;
&lt;li&gt;Leaderboards and persistent APIs
&lt;/li&gt;
&lt;li&gt;Matchmaking and session management
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is &lt;strong&gt;not ideal&lt;/strong&gt; for:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Graphics-heavy engines
&lt;/li&gt;
&lt;li&gt;CPU-intensive simulations
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This distinction prevents wasted effort.&lt;/p&gt;

&lt;h2&gt;
  
  
  Frontend game engines vs. backend game logic
&lt;/h2&gt;

&lt;p&gt;A common confusion is mixing frontend and backend responsibilities.&lt;/p&gt;

&lt;h3&gt;
  
  
  Frontend (Client-side)
&lt;/h3&gt;

&lt;p&gt;Handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rendering
&lt;/li&gt;
&lt;li&gt;Animation loops
&lt;/li&gt;
&lt;li&gt;Input handling
&lt;/li&gt;
&lt;li&gt;Physics simulation
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Backend (Server-side with Node.js)
&lt;/h3&gt;

&lt;p&gt;Handles:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Player authentication
&lt;/li&gt;
&lt;li&gt;State synchronization
&lt;/li&gt;
&lt;li&gt;Anti-cheat validation
&lt;/li&gt;
&lt;li&gt;Matchmaking
&lt;/li&gt;
&lt;li&gt;Persistent storage
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Node.js lives entirely in the backend layer.&lt;/p&gt;

&lt;p&gt;In multiplayer systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;strong&gt;client renders&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;The &lt;strong&gt;server validates and controls&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That authority model is critical.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why foundational Node knowledge matters first
&lt;/h2&gt;

&lt;p&gt;Before using multiplayer frameworks, you must understand:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Event loop
&lt;/li&gt;
&lt;li&gt;Asynchronous programming
&lt;/li&gt;
&lt;li&gt;Streams
&lt;/li&gt;
&lt;li&gt;Networking basics
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Without this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Blocking operations will break performance
&lt;/li&gt;
&lt;li&gt;Async logic becomes unmanageable
&lt;/li&gt;
&lt;li&gt;Networking becomes fragile
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Educative – &lt;a href="https://www.educative.io/courses/learn-node-js" rel="noopener noreferrer"&gt;Learn Node.js&lt;/a&gt;
&lt;/h3&gt;

&lt;p&gt;This course focuses on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Core Node architecture
&lt;/li&gt;
&lt;li&gt;Async patterns
&lt;/li&gt;
&lt;li&gt;HTTP servers
&lt;/li&gt;
&lt;li&gt;Modular design
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It is not game-specific, but it builds the foundation required for multiplayer systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  How different resource types complement each other
&lt;/h2&gt;

&lt;p&gt;Game development with Node requires combining multiple resource types.&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Resource Type&lt;/th&gt;
&lt;th&gt;Focus&lt;/th&gt;
&lt;th&gt;Strength&lt;/th&gt;
&lt;th&gt;Best For&lt;/th&gt;
&lt;th&gt;Limitations&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Structured Node Course (e.g., Educative)&lt;/td&gt;
&lt;td&gt;Core architecture&lt;/td&gt;
&lt;td&gt;Clear progression, strong fundamentals&lt;/td&gt;
&lt;td&gt;Beginners building backend knowledge&lt;/td&gt;
&lt;td&gt;Not game-specific&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Official Node &amp;amp; WebSocket Docs&lt;/td&gt;
&lt;td&gt;API precision&lt;/td&gt;
&lt;td&gt;Deep technical reference&lt;/td&gt;
&lt;td&gt;Refining networking understanding&lt;/td&gt;
&lt;td&gt;Requires prior knowledge&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Game Networking Tutorials&lt;/td&gt;
&lt;td&gt;Real-time examples&lt;/td&gt;
&lt;td&gt;Shows integration patterns&lt;/td&gt;
&lt;td&gt;Bridging theory and practice&lt;/td&gt;
&lt;td&gt;Often oversimplified&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Open-Source Multiplayer Projects&lt;/td&gt;
&lt;td&gt;Real-world architecture&lt;/td&gt;
&lt;td&gt;Exposure to production patterns&lt;/td&gt;
&lt;td&gt;Intermediate learners&lt;/td&gt;
&lt;td&gt;Can be overwhelming&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Each resource serves a different purpose; none replaces the others.&lt;/p&gt;

&lt;h2&gt;
  
  
  A narrative walkthrough: from Node basics to multiplayer server
&lt;/h2&gt;

&lt;p&gt;Let’s walk through a realistic progression.&lt;/p&gt;

&lt;h3&gt;
  
  
  Step 1: Learn Node fundamentals
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;HTTP servers
&lt;/li&gt;
&lt;li&gt;Async callbacks
&lt;/li&gt;
&lt;li&gt;Modules
&lt;/li&gt;
&lt;li&gt;API structure
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 2: Build a WebSocket system
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Create a chat server
&lt;/li&gt;
&lt;li&gt;Handle persistent connections
&lt;/li&gt;
&lt;li&gt;Broadcast messages
&lt;/li&gt;
&lt;/ul&gt;
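&lt;p&gt;The heart of that chat server is a single broadcast loop. A framework-agnostic sketch — with the &lt;code&gt;ws&lt;/code&gt; package, the &lt;code&gt;clients&lt;/code&gt; set would be &lt;code&gt;wss.clients&lt;/code&gt;:&lt;/p&gt;

```javascript
// Broadcast pattern at the core of a WebSocket chat server.
// Kept framework-agnostic: any object with `readyState` and `send()` works.
const OPEN = 1; // matches WebSocket.OPEN

function broadcast(clients, message, sender = null) {
  const payload = JSON.stringify(message);
  let delivered = 0;
  for (const client of clients) {
    // Skip closed sockets and (optionally) the original sender
    if (client.readyState !== OPEN || client === sender) continue;
    client.send(payload);
    delivered += 1;
  }
  return delivered;
}
```

Returning the delivery count makes the function easy to unit-test before any real sockets are involved.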

&lt;h3&gt;
  
  
  Step 3: Build a simple real-time prototype
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Shared whiteboard or counter
&lt;/li&gt;
&lt;li&gt;Introduce state synchronization
&lt;/li&gt;
&lt;li&gt;Encounter race conditions
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Step 4: Introduce server authority
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Validate player actions
&lt;/li&gt;
&lt;li&gt;Separate logic layers
&lt;/li&gt;
&lt;li&gt;Manage state centrally
&lt;/li&gt;
&lt;/ul&gt;
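&lt;p&gt;Server authority boils down to this: the client &lt;em&gt;requests&lt;/em&gt; a change, and the server validates before mutating canonical state. A sketch with a made-up per-tick speed limit:&lt;/p&gt;

```javascript
// Server-authoritative movement check. MAX_SPEED is a game-specific
// assumption: the farthest a player may legally move in one tick.
const MAX_SPEED = 5;

function applyMove(state, playerId, requested) {
  const player = state.players[playerId];
  if (!player) return { ok: false, reason: "unknown player" };
  const dx = requested.x - player.x;
  const dy = requested.y - player.y;
  // Reject teleports: distance per tick must respect the speed limit
  if (Math.hypot(dx, dy) > MAX_SPEED) {
    return { ok: false, reason: "too fast" }; // likely cheating or a lag spike
  }
  player.x = requested.x;
  player.y = requested.y;
  return { ok: true };
}
```

A rejected request leaves server state untouched — the client simply gets snapped back on the next state broadcast.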

&lt;h3&gt;
  
  
  Step 5: Build a simple multiplayer game
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Real-time position updates
&lt;/li&gt;
&lt;li&gt;Tick rates
&lt;/li&gt;
&lt;li&gt;Latency handling
&lt;/li&gt;
&lt;li&gt;Prediction techniques
&lt;/li&gt;
&lt;/ul&gt;
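&lt;p&gt;Prediction can start as simple dead reckoning: between server ticks, the client extrapolates an entity's position from its last known velocity to hide latency. A minimal sketch:&lt;/p&gt;

```javascript
// Dead reckoning: extrapolate from the most recent server snapshot.
// snapshot = { x, y, vx, vy } with velocity in units per second.
function predictPosition(snapshot, elapsedMs) {
  const dt = elapsedMs / 1000; // seconds since the snapshot arrived
  return {
    x: snapshot.x + snapshot.vx * dt,
    y: snapshot.y + snapshot.vy * dt,
  };
}
```

When the next authoritative snapshot arrives, the client blends toward it rather than teleporting — that correction step is where most of the real complexity lives.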

&lt;p&gt;At this stage, you move from coding to &lt;strong&gt;system design thinking&lt;/strong&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  When you’re ready to build a real-time multiplayer server
&lt;/h2&gt;

&lt;p&gt;You’re ready when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You understand the event loop deeply
&lt;/li&gt;
&lt;li&gt;You avoid blocking operations
&lt;/li&gt;
&lt;li&gt;You handle shared state safely
&lt;/li&gt;
&lt;li&gt;You understand authoritative server models
&lt;/li&gt;
&lt;li&gt;You can reason about scaling
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are backend engineering skills, not game engine skills.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mixing theory and practical experimentation
&lt;/h2&gt;

&lt;p&gt;A common mistake is relying only on quick tutorials.&lt;/p&gt;

&lt;p&gt;Problems you’ll face in real systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reconnection handling
&lt;/li&gt;
&lt;li&gt;Server crashes
&lt;/li&gt;
&lt;li&gt;Cheating prevention
&lt;/li&gt;
&lt;li&gt;Horizontal scaling
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To handle these:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Theory → gives mental models
&lt;/li&gt;
&lt;li&gt;Practice → reveals real-world issues
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You need both.&lt;/p&gt;

&lt;h2&gt;
  
  
  Choosing your learning path intentionally
&lt;/h2&gt;

&lt;p&gt;Your path depends on your starting point.&lt;/p&gt;

&lt;h3&gt;
  
  
  If you’re new to backend development:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Start with Node fundamentals
&lt;/li&gt;
&lt;li&gt;Focus on async patterns and architecture
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  If you know Node but not networking:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Learn WebSockets
&lt;/li&gt;
&lt;li&gt;Study state synchronization
&lt;/li&gt;
&lt;li&gt;Explore distributed systems
&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  If you know both:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Build real projects
&lt;/li&gt;
&lt;li&gt;Create matchmaking systems
&lt;/li&gt;
&lt;li&gt;Design multiplayer prototypes
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;At every stage, ask:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Am I copying patterns, or do I understand them?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  Bringing it back to the core question
&lt;/h2&gt;

&lt;p&gt;If someone asks again, &lt;em&gt;What are the best resources to learn game development with Node.js?&lt;/em&gt;, the answer is not a simple list.&lt;/p&gt;

&lt;p&gt;It’s a layered approach:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Strengthen Node fundamentals
&lt;/li&gt;
&lt;li&gt;Learn networking concepts
&lt;/li&gt;
&lt;li&gt;Experiment with real-time systems
&lt;/li&gt;
&lt;li&gt;Study architecture through projects
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Node is not a game engine. It is a &lt;strong&gt;coordination engine&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;When you treat it that way, your learning path becomes clear.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The best resources are not the flashiest ones. They are the ones that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Build strong fundamentals
&lt;/li&gt;
&lt;li&gt;Improve reasoning
&lt;/li&gt;
&lt;li&gt;Expose real-world complexity
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Game development with Node is really about building distributed systems for real-time interaction.&lt;/p&gt;

&lt;p&gt;And the moment you shift from syntax to architecture—that’s when real learning begins.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>javascript</category>
      <category>node</category>
      <category>resources</category>
    </item>
    <item>
      <title>Building an AI Helpdesk SaaS with Agentic Automation</title>
      <dc:creator>Datheon</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:19:12 +0000</pubDate>
      <link>https://dev.to/datheon/building-an-ai-helpdesk-saas-with-agentic-automation-3dcb</link>
      <guid>https://dev.to/datheon/building-an-ai-helpdesk-saas-with-agentic-automation-3dcb</guid>
      <description>&lt;p&gt;Hey devs 👋&lt;/p&gt;

&lt;p&gt;I just built a full-stack AI SaaS helpdesk platform focused on automation, intelligent ticket handling, and what many are starting to call &lt;em&gt;agentic AI systems&lt;/em&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  🚀 The Idea
&lt;/h2&gt;

&lt;p&gt;Most support systems are reactive.&lt;/p&gt;

&lt;p&gt;I wanted something that:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Understands tickets automatically&lt;/li&gt;
&lt;li&gt;Makes decisions&lt;/li&gt;
&lt;li&gt;Takes actions without constant human input&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Inspired by Deptheon-style architectures, I designed a system that behaves more like an &lt;strong&gt;intelligent operator&lt;/strong&gt; than just a tool.&lt;/p&gt;

&lt;h2&gt;
  
  
  🧱 Tech Stack
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;FastAPI + PostgreSQL (backend)&lt;/li&gt;
&lt;li&gt;React + TypeScript + Tailwind (frontend)&lt;/li&gt;
&lt;li&gt;Ollama (Llama3) for local AI&lt;/li&gt;
&lt;li&gt;n8n for automation (29 workflows 🤯)&lt;/li&gt;
&lt;li&gt;Stripe for billing&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  ⚙️ What It Does
&lt;/h2&gt;

&lt;p&gt;Every ticket is automatically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Categorized &amp;amp; prioritized&lt;/li&gt;
&lt;li&gt;Sentiment analyzed&lt;/li&gt;
&lt;li&gt;Checked for duplicates&lt;/li&gt;
&lt;li&gt;Assigned to the best available agent&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Then AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generates replies&lt;/li&gt;
&lt;li&gt;Detects frustrated users&lt;/li&gt;
&lt;li&gt;Auto-resolves common issues&lt;/li&gt;
&lt;li&gt;Builds a knowledge base&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  🤖 Agentic Layer
&lt;/h2&gt;

&lt;p&gt;Instead of simple LLM calls, the system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Observes&lt;/li&gt;
&lt;li&gt;Decides&lt;/li&gt;
&lt;li&gt;Acts&lt;/li&gt;
&lt;/ol&gt;
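A minimal sketch of that observe → decide → act cycle (the callables here are toy stand-ins, not the project's actual code):

```python
from typing import Callable

def agent_step(observe: Callable[[], dict],
               decide: Callable[[dict], str],
               act: Callable[[str], str]) -> str:
    """One observe -> decide -> act cycle, as described above."""
    state = observe()          # 1. Observe: gather ticket context
    action = decide(state)     # 2. Decide: pick an action (an LLM call in practice)
    return act(action)         # 3. Act: execute it (reply, escalate, resolve)

# Toy wiring to show the flow:
result = agent_step(
    observe=lambda: {"sentiment": "negative", "category": "billing"},
    decide=lambda s: "escalate" if s["sentiment"] == "negative" else "auto_reply",
    act=lambda a: f"action taken: {a}",
)
# result == "action taken: escalate"
```

The point of the structure is that "agentic" is just this loop run repeatedly with real observations and real side effects.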

&lt;p&gt;That’s where the real power comes in.&lt;/p&gt;

&lt;h2&gt;🔁 Automation&lt;/h2&gt;

&lt;p&gt;With n8n, I implemented:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SLA breach alerts&lt;/li&gt;
&lt;li&gt;Churn prediction&lt;/li&gt;
&lt;li&gt;Incident detection&lt;/li&gt;
&lt;li&gt;Auto follow-ups&lt;/li&gt;
&lt;li&gt;Smart ticket routing&lt;/li&gt;
&lt;/ul&gt;
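As one illustration, the SLA-breach alert from the list above could look like this in Python (the thresholds and field names are hypothetical; the real system runs this kind of check as an n8n workflow):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical first-response SLAs per priority level.
SLA = {"high": timedelta(hours=1), "normal": timedelta(hours=8)}

def sla_breaches(tickets: list[dict], now: datetime) -> list[dict]:
    """Return tickets whose first response is overdue for their priority."""
    return [t for t in tickets
            if t["responded_at"] is None
            and now - t["opened_at"] > SLA[t["priority"]]]

# Example: two unanswered tickets opened two hours ago.
now = datetime(2026, 4, 23, 12, 0, tzinfo=timezone.utc)
tickets = [
    {"id": 1, "priority": "high",   "opened_at": now - timedelta(hours=2), "responded_at": None},
    {"id": 2, "priority": "normal", "opened_at": now - timedelta(hours=2), "responded_at": None},
]
overdue = sla_breaches(tickets, now)  # only ticket 1 breaches its 1-hour SLA
```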

&lt;h2&gt;🧠 What I Learned&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;AI alone isn’t enough — orchestration is everything&lt;/li&gt;
&lt;li&gt;Automation + LLMs = real leverage&lt;/li&gt;
&lt;li&gt;“Agentic systems” are structured decision systems (not magic)&lt;/li&gt;
&lt;li&gt;Local AI is underrated&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;📊 Final Thought&lt;/h2&gt;

&lt;p&gt;We’re moving from:&lt;br&gt;
&lt;strong&gt;AI features → AI systems that operate businesses&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;And that changes everything.&lt;/p&gt;




&lt;p&gt;Would love feedback or ideas from anyone building in AI / SaaS 🙌&lt;/p&gt;

</description>
      <category>ai</category>
      <category>saas</category>
      <category>webdev</category>
      <category>automation</category>
    </item>
    <item>
      <title>The Nine-Year Journey to the Orca Emoji (U+1FACD) — How a Single Character Moved the World</title>
      <dc:creator>upa_rupa</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:18:26 +0000</pubDate>
      <link>https://dev.to/upa_rupa/the-nine-year-journey-to-the-orca-emoji-u1facd-how-a-single-character-moved-the-world-elg</link>
      <guid>https://dev.to/upa_rupa/the-nine-year-journey-to-the-orca-emoji-u1facd-how-a-single-character-moved-the-world-elg</guid>
      <description>&lt;p&gt;I love orcas, so I always felt a bit sad that while we had whale (🐋) and dolphin (🐬) emojis, there was no orca. Then recently, I came across news that the orca emoji would be introduced in iOS 26.4&lt;sup&gt;[1]&lt;/sup&gt;. Since emoji aren't an Apple-proprietary thing — they're defined by an international standard called Unicode — I knew this had to be a bigger story. So I dug in.&lt;/p&gt;

&lt;p&gt;What I found was the story behind the new orca emoji &lt;span&gt;🫍&lt;/span&gt;: a nine-year campaign by people around the world pushing for its adoption into the Unicode standard.&lt;/p&gt;

&lt;h2&gt;Basic Info&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Field&lt;/th&gt;
&lt;th&gt;Details&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Code Point&lt;/td&gt;
&lt;td&gt;U+1FACD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Name&lt;/td&gt;
&lt;td&gt;ORCA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Block&lt;/td&gt;
&lt;td&gt;Symbols and Pictographs Extended-A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Standard Version&lt;/td&gt;
&lt;td&gt;Unicode 17.0 / Emoji 17.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Release Date&lt;/td&gt;
&lt;td&gt;September 9, 2025&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;This emoji was added as a new standalone code point&lt;sup&gt;[2]&lt;/sup&gt; — not expressed by combining existing emoji, but assigned its own unique code: &lt;code&gt;U+1FACD&lt;/code&gt;.&lt;/p&gt;
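Because it is a single standalone code point, any language with full Unicode support can construct the character directly. In Python, for example:

```python
# U+1FACD ORCA: one standalone code point, not a ZWJ emoji sequence.
orca = chr(0x1FACD)
print(orca)                  # 🫍 (if your font supports Emoji 17.0)
print(len(orca))             # 1 -- a single code point
print(hex(ord(orca)))        # 0x1facd
print(orca.encode("utf-8"))  # b'\xf0\x9f\xab\x8d' -- four bytes in UTF-8
```

Compare this with emoji built from sequences (like many family emoji), which combine several code points with zero-width joiners.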

&lt;h2&gt;2016–2017: Voices Rise Around the World&lt;/h2&gt;

&lt;p&gt;The orca emoji story begins well before any official proposal, and in multiple places at once.&lt;/p&gt;

&lt;p&gt;In August 2016, Jökull Ingi Þorvaldsson from Iceland started an online petition on Change.org called "Make a Killer Whale Emoji," asking Apple to add an orca emoji&lt;sup&gt;[4]&lt;/sup&gt;. It gathered 39 signatures.&lt;/p&gt;

&lt;p&gt;The following February (2017), Christoph Päper, a Unicode contributor from Germany, opened an issue titled "Orca emoji" on the GitHub Unicode proposals tracker (Crissov/unicode-proposals)&lt;sup&gt;[3]&lt;/sup&gt;. His point: we have whale (🐋) and dolphin (🐬) emoji — why not orca? The issue included a link to the Change.org petition above.&lt;/p&gt;

&lt;p&gt;The voices were there. But voices alone don't create emoji. A formal proposal had to be submitted to the Unicode Consortium.&lt;/p&gt;

&lt;h2&gt;February 2019: A Third Person Steps Up&lt;/h2&gt;

&lt;p&gt;In 2019, a Spanish developer named &lt;strong&gt;Marcos Del Sol Vives&lt;/strong&gt; turned these feelings into a formal proposal&lt;sup&gt;[5]&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;On February 20th, Marcos submitted a proposal for the orca emoji to the Unicode Consortium's Emoji Subcommittee (ESC). He had no connection to Jökull in Iceland or Christoph in Germany — he was a third, independent voice.&lt;/p&gt;

&lt;p&gt;Then in September 2020, Lukas Ewert in Germany (also independently) launched another Change.org petition, "Make Orcas an Emoji," collecting 356 signatures&lt;sup&gt;[15]&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;People around the world, strangers to each other, were all asking for the same thing: &lt;strong&gt;an orca emoji&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Marcos's proposal followed the format Unicode requires and made a compelling case&lt;sup&gt;[6]&lt;/sup&gt;.&lt;/p&gt;

&lt;h3&gt;Search Popularity Comparison&lt;/h3&gt;

&lt;p&gt;Using Bing search trends, Marcos showed that "orca" had roughly the same search popularity as "elephant." Elephants already have an emoji (🐘) — orcas don't. He quantified the gap.&lt;/p&gt;

&lt;h3&gt;No Existing Emoji Covers It&lt;/h3&gt;

&lt;p&gt;Unicode's proposal review includes an "exclusion factors" section, where proposers must argue why their submission shouldn't be excluded. Marcos addressed this directly&lt;sup&gt;[6]&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;"Can't an existing emoji substitute?" — No. Orcas are commonly called "killer whales" but are scientifically members of the dolphin family, and they look quite different from whales. "Too specialized?" — Pufferfish, crickets, and swans already have emoji, so orca can't be considered too specialized. "Just a passing trend?" — Orcas have existed on Earth for about 11 million years, the proposal notes.&lt;/p&gt;

&lt;h3&gt;Sample Images&lt;/h3&gt;

&lt;p&gt;Following Unicode Consortium requirements, Marcos prepared sample images at 18×18 and 72×72 pixels, in both black-and-white and color.&lt;/p&gt;

&lt;h2&gt;2019–2023: The Gate Is Closed&lt;/h2&gt;

&lt;p&gt;From here, Marcos's proposal entered a long wait.&lt;/p&gt;

&lt;p&gt;The proposal was sent in 2019. But the formal document number &lt;strong&gt;L2/24-249&lt;/strong&gt; wasn't registered in the Unicode document registry until &lt;strong&gt;2024&lt;/strong&gt;&lt;sup&gt;[6]&lt;/sup&gt;. There is no orca-related document anywhere in the 2019 Document Register&lt;sup&gt;[7]&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;So what happened during those five years? Looking into it, what emerged wasn't simple neglect — it was a deliberate decision on Unicode's part.&lt;/p&gt;

&lt;h3&gt;The Iron Rule: Once Added, Never Removed&lt;/h3&gt;

&lt;p&gt;First, some background: Unicode has an absolute rule called the &lt;strong&gt;Stability Policy&lt;/strong&gt;&lt;sup&gt;[17]&lt;/sup&gt;.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Once a character is encoded, it will not be moved or removed.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This means that once a code point is assigned, it stays there as long as humanity uses this standard — forever. Mistakes cannot be undone. This is the foundation of Unicode's caution around adding emoji.&lt;/p&gt;

&lt;h3&gt;A Strategic Pause — Shifting to Quality Over Quantity&lt;/h3&gt;

&lt;p&gt;In 2020, the COVID-19 pandemic pushed back the Unicode 14.0 release by six months&lt;sup&gt;[2]&lt;/sup&gt;. Then in autumn 2022, the UTC (Unicode Technical Committee) announced that Unicode 15.1 would be a limited release.&lt;/p&gt;

&lt;p&gt;ESC chair Jennifer Daniel saw this as an opportunity. Her January 2023 blog post, "Breaking the Cycle"&lt;sup&gt;[18]&lt;/sup&gt;, stated:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;emoji categories are about to hit or have hit a level of saturation.&lt;/p&gt;

&lt;p&gt;the ESC approves fewer and fewer emoji proposals every year.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The ESC used this pause to tackle longstanding issues: unifying skin tone variations, redesigning family emoji, and improving bidirectional text support. They also decided to &lt;strong&gt;temporarily delay the Unicode 17.0 submission window until April 2024&lt;/strong&gt;&lt;sup&gt;[18]&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;This wasn't a shutdown from neglect. It was a &lt;strong&gt;deliberate pause to redefine the emoji addition process itself&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;Marcos's Proposal, Waiting at the Gate&lt;/h3&gt;

&lt;p&gt;There is no record of the orca emoji being rejected. It doesn't appear on Charlotte Buff's list of rejected emoji proposals&lt;sup&gt;[10]&lt;/sup&gt;, nor in the official Unicode non-approval notice archive&lt;sup&gt;[11]&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Marcos's proposal wasn't denied. The gate was simply closed.&lt;/p&gt;

&lt;h2&gt;2024: The Gate Reopens&lt;/h2&gt;

&lt;p&gt;On April 2, 2024, submissions reopened with new guidelines&lt;sup&gt;[8]&lt;/sup&gt;. ESC chair Jennifer Daniel announced the reopening&lt;sup&gt;[9]&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;When the gate opened, Marcos's proposal — waiting since 2019 — finally received its formal document number, &lt;strong&gt;L2/24-249&lt;/strong&gt;&lt;sup&gt;[6]&lt;/sup&gt;. Later that year in November, the ESC proposed 164 new emoji candidates to the UTC, including the orca&lt;sup&gt;[12]&lt;/sup&gt;. Of those 164: 9 new code points and roughly 155 skin tone variations of existing emoji. One of the new code points — the Apple Core — was ultimately withdrawn, and the remaining 163 were approved as Emoji 17.0.&lt;/p&gt;

&lt;p&gt;No objections were submitted during the Public Review Issue (PRI #515) for Emoji 17.0 candidates&lt;sup&gt;[16]&lt;/sup&gt;.&lt;/p&gt;

&lt;h2&gt;September 9, 2025: Official Adoption&lt;/h2&gt;

&lt;p&gt;As part of Unicode 17.0 / Emoji 17.0, the orca emoji was officially approved&lt;sup&gt;[2]&lt;/sup&gt;. About &lt;strong&gt;nine years&lt;/strong&gt; from the first online petition; about &lt;strong&gt;six and a half years&lt;/strong&gt; from Marcos's proposal.&lt;/p&gt;

&lt;p&gt;Marcos himself hasn't said much publicly about the details of this journey. His site orca.pet simply records the fact of the proposal and the fact of its adoption&lt;sup&gt;[5]&lt;/sup&gt;.&lt;/p&gt;

&lt;h2&gt;The Emoji Approval Process: Open, but Careful&lt;/h2&gt;

&lt;p&gt;Unicode's emoji approval process is designed to be transparent&lt;sup&gt;[13]&lt;/sup&gt;.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Submit a proposal&lt;/strong&gt;: Anyone can submit an emoji proposal&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ESC review&lt;/strong&gt;: The Emoji Subcommittee evaluates proposals and decides whether to recommend them to the UTC&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UTC deliberation&lt;/strong&gt;: Discussed by the technical committee; meeting notes are published&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Draft candidate list&lt;/strong&gt;: Before final approval, a candidate list is published as a Public Review Issue (PRI), open for public feedback, which is also published&lt;sup&gt;[16]&lt;/sup&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Official release&lt;/strong&gt;: Released as a new version of the Unicode standard&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;All proposals are published as PDFs on unicode.org and available for anyone to read&lt;sup&gt;[6]&lt;/sup&gt;.&lt;/p&gt;

&lt;p&gt;Given that an assigned code point can never be removed, this caution has good reason. As the orca's case shows, years between proposal and adoption aren't unusual. But that's not negligence — it's a reflection of the weight of defining a standard that will be used, permanently, across the world.&lt;/p&gt;

&lt;h2&gt;Platform Support Status (as of March 2026)&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Platform&lt;/th&gt;
&lt;th&gt;Status&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Google Noto Color Emoji&lt;/td&gt;
&lt;td&gt;Supported (v2.051, released September 12, 2025)&lt;sup&gt;[14]&lt;/sup&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Apple (iOS / macOS)&lt;/td&gt;
&lt;td&gt;In beta with iOS 26.4 / macOS 26.4; stable release expected March–April 2026&lt;sup&gt;[1]&lt;/sup&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;X (formerly Twitter)&lt;/td&gt;
&lt;td&gt;Supported (Twemoji v17.0)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft (Windows)&lt;/td&gt;
&lt;td&gt;Not yet supported; as of March 2026, Windows has only just reached Emoji 16.0 support&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Unicode defines the code point and meaning, but the visual design is left to each platform (Apple, Google, Samsung, etc.). That's why the same &lt;code&gt;U+1FACD&lt;/code&gt; looks different on iPhone versus Android.&lt;/p&gt;

&lt;h2&gt;Display Test with Noto Color Emoji&lt;/h2&gt;

&lt;p&gt;This article loads Google's &lt;strong&gt;Noto Color Emoji&lt;/strong&gt; font to test rendering of the new orca emoji.&lt;/p&gt;

&lt;p&gt;&lt;span&gt;🫍&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;Even if your browser or OS doesn't support Unicode 17.0 emoji yet, you should see it rendered via Noto Color Emoji. If you can see a large orca above, it's working.&lt;/p&gt;

&lt;h2&gt;Summary&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The orca emoji (🫍) was born from independent voices in Iceland, Germany, and Spain — and took nine years to make it into the Unicode standard&lt;/li&gt;
&lt;li&gt;Proposer Marcos Del Sol Vives built a case backed by search data and comparative arguments&lt;/li&gt;
&lt;li&gt;The five-year gap wasn't neglect — it was a deliberate pause for Unicode to confront emoji saturation and redefine the addition process&lt;/li&gt;
&lt;li&gt;Unicode's emoji approval process is transparent, but the Stability Policy demands care&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Every emoji we casually use has a story behind it. And those stories are often less about triumph than about patience and a series of coincidences stacking up over time.&lt;/p&gt;

&lt;h2&gt;References&lt;/h2&gt;

&lt;ol&gt;
&lt;li id="ref-1"&gt;iPhone Mania, "iOS26.4で利用可能になる新絵文字のデザインが明らかに！ (Japanese)" March 10, 2026. &lt;a href="https://iphone-mania.jp/ios-600850/" rel="noopener noreferrer"&gt;https://iphone-mania.jp/ios-600850/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-2"&gt;The Unicode Blog, "Unicode 17.0 Release Announcement," September 9, 2025. &lt;a href="http://blog.unicode.org/2025/09/unicode-170-release-announcement.html" rel="noopener noreferrer"&gt;http://blog.unicode.org/2025/09/unicode-170-release-announcement.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-3"&gt;Crissov/unicode-proposals, "Issue #103: Orca emoji," GitHub, February 5, 2017. &lt;a href="https://github.com/Crissov/unicode-proposals/issues/103" rel="noopener noreferrer"&gt;https://github.com/Crissov/unicode-proposals/issues/103&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-4"&gt;Jökull Ingi Þorvaldsson, "Make a Killer Whale Emoji," Change.org, August 2016. &lt;a href="https://www.change.org/p/apple-make-a-killer-whale-emoji-in-apple-s-emoji-board" rel="noopener noreferrer"&gt;https://www.change.org/p/apple-make-a-killer-whale-emoji-in-apple-s-emoji-board&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-5"&gt;Marcos Del Sol Vives, "Orca emoji," orca.pet. &lt;a href="https://orca.pet/emoji/" rel="noopener noreferrer"&gt;https://orca.pet/emoji/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-6"&gt;Marcos Del Sol Vives, "Proposal for Orca Emoji," Unicode Document L2/24-249, 2024. &lt;a href="https://www.unicode.org/L2/L2024/24249-orca-emoji.pdf" rel="noopener noreferrer"&gt;https://www.unicode.org/L2/L2024/24249-orca-emoji.pdf&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-7"&gt;The Unicode Consortium, "UTC Document Register — 2019." &lt;a href="https://www.unicode.org/L2/L2019/Register-2019.html" rel="noopener noreferrer"&gt;https://www.unicode.org/L2/L2019/Register-2019.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-8"&gt;The Unicode Blog, "Emoji Submissions Intake Process Re-opening," March 2024. &lt;a href="http://blog.unicode.org/2024/03/emoji-submissions-intake-process-re.html" rel="noopener noreferrer"&gt;http://blog.unicode.org/2024/03/emoji-submissions-intake-process-re.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-9"&gt;Jennifer Daniel, "Emoji submissions re-opening," Substack. &lt;a href="https://jenniferdaniel.substack.com/p/emoji-submissions-re-opening-april" rel="noopener noreferrer"&gt;https://jenniferdaniel.substack.com/p/emoji-submissions-re-opening-april&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-10"&gt;Charlotte Buff, "Rejected Emoji Proposals." &lt;a href="https://charlottebuff.com/unicode/misc/rejected-emoji-proposals/" rel="noopener noreferrer"&gt;https://charlottebuff.com/unicode/misc/rejected-emoji-proposals/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-11"&gt;The Unicode Consortium, "Archive of Notices of Non-Approval." &lt;a href="https://www.unicode.org/alloc/nonapprovals.html" rel="noopener noreferrer"&gt;https://www.unicode.org/alloc/nonapprovals.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-12"&gt;Emojipedia Blog, "What's New In Unicode 17.0." &lt;a href="https://blog.emojipedia.org/whats-new-in-unicode-17-0/" rel="noopener noreferrer"&gt;https://blog.emojipedia.org/whats-new-in-unicode-17-0/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-13"&gt;The Unicode Consortium, "Submitting Emoji Proposals." &lt;a href="https://unicode.org/emoji/proposals.html" rel="noopener noreferrer"&gt;https://unicode.org/emoji/proposals.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-14"&gt;Emojipedia Blog, "Google Debuts Emoji 17.0 Support." &lt;a href="https://blog.emojipedia.org/google-debuts-emoji-17-0-support/" rel="noopener noreferrer"&gt;https://blog.emojipedia.org/google-debuts-emoji-17-0-support/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-15"&gt;Lukas Ewert, "Make Orcas an Emoji," Change.org, September 2020. &lt;a href="https://www.change.org/p/apple-make-orcas-an-emoji" rel="noopener noreferrer"&gt;https://www.change.org/p/apple-make-orcas-an-emoji&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-16"&gt;The Unicode Consortium, "PRI #515: Unicode Emoji 17.0 Alpha Repertoire." &lt;a href="https://www.unicode.org/review/pri515/" rel="noopener noreferrer"&gt;https://www.unicode.org/review/pri515/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-17"&gt;The Unicode Consortium, "Unicode Character Encoding Stability Policies." &lt;a href="https://www.unicode.org/policies/stability_policy.html" rel="noopener noreferrer"&gt;https://www.unicode.org/policies/stability_policy.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-18"&gt;Jennifer Daniel, "Breaking the Cycle," The Unicode Blog, March 8, 2024 (originally published January 17, 2023). &lt;a href="http://blog.unicode.org/2024/03/breaking-cycle.html" rel="noopener noreferrer"&gt;http://blog.unicode.org/2024/03/breaking-cycle.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;




&lt;p&gt;&lt;em&gt;This article was originally published in Japanese at &lt;a href="https://archelon-inc.jp/blog/unicode-orca-emoji-story" rel="noopener noreferrer"&gt;archelon-inc.jp&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>unicode</category>
      <category>emoji</category>
      <category>macos</category>
      <category>orca</category>
    </item>
    <item>
      <title>シャチ絵文字（U+1FACD）がUnicodeに採用されるまで — 1文字のために世界が動いた9年越しの軌跡</title>
      <dc:creator>upa_rupa</dc:creator>
      <pubDate>Thu, 23 Apr 2026 05:18:22 +0000</pubDate>
      <link>https://dev.to/upa_rupa/siyatihui-wen-zi-u1facdgaunicodenicai-yong-sarerumade-1wen-zi-notamenishi-jie-gadong-ita9nian-yue-sinogui-ji-1dbn</link>
      <guid>https://dev.to/upa_rupa/siyatihui-wen-zi-u1facdgaunicodenicai-yong-sarerumade-1wen-zi-notamenishi-jie-gadong-ita9nian-yue-sinogui-ji-1dbn</guid>
      <description>&lt;h1&gt;
  
  
  シャチ絵文字（U+1FACD）がUnicodeに採用されるまで — 1文字のために世界が動いた9年越しの軌跡
&lt;/h1&gt;

&lt;p&gt;私はシャチが好きである。なので、クジラ（&lt;span&gt;🐋&lt;/span&gt;）やイルカ（&lt;span&gt;🐬&lt;/span&gt;）の絵文字はあるのにシャチがないことを残念に思っていた。&lt;br&gt;&lt;br&gt;
しかし最近、iOS 26.4からシャチの絵文字が追加されるというニュースを目にした&lt;sup&gt;[1]&lt;/sup&gt;。絵文字はApple独自のものではなくUnicodeという国際標準で定められているはずだから、これはもっと大きな話のはずだ — そう思って調べてみた。&lt;/p&gt;

&lt;p&gt;すると見えてきたのは、新しいシャチの絵文字 &lt;span&gt;🫍&lt;/span&gt; の裏にある、9年にわたって世界各地の人々がUnicode標準への採用を求め続けた物語だった。&lt;/p&gt;

&lt;h2&gt;
  
  
  シャチ絵文字の基本情報
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;項目&lt;/th&gt;
&lt;th&gt;内容&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;コードポイント&lt;/td&gt;
&lt;td&gt;U+1FACD&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;名称&lt;/td&gt;
&lt;td&gt;ORCA&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;収録ブロック&lt;/td&gt;
&lt;td&gt;Symbols and Pictographs Extended-A&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;標準バージョン&lt;/td&gt;
&lt;td&gt;Unicode 17.0 / Emoji 17.0&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;リリース日&lt;/td&gt;
&lt;td&gt;2025年9月9日&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;この絵文字は単独の新規コードポイントとして追加された&lt;sup&gt;[2]&lt;/sup&gt;。既存の絵文字を組み合わせて表現するのではなく、&lt;code&gt;U+1FACD&lt;/code&gt; という固有のコードが割り当てられている。&lt;/p&gt;

&lt;h2&gt;
  
  
  2016年〜2017年 — 世界各地から湧き上がった声
&lt;/h2&gt;

&lt;p&gt;シャチ絵文字の物語は、正式な提案よりもずっと前に、しかも別々の場所で始まっている。&lt;/p&gt;

&lt;p&gt;2016年8月、アイスランドの Jökull Ingi Þorvaldsson がChange.orgで「Make a Killer Whale Emoji」というオンライン署名活動を開始した&lt;sup&gt;[4]&lt;/sup&gt;。Appleに対してシャチの絵文字を作るよう求めるこの署名には39名が賛同した。&lt;/p&gt;

&lt;p&gt;翌2017年2月、ドイツのUnicodeコントリビューターである Christoph Päper が、GitHubのUnicode提案トラッカー（Crissov/unicode-proposals）に「Orca emoji」のissueを立てた&lt;sup&gt;[3]&lt;/sup&gt;。クジラ（&lt;span&gt;🐋&lt;/span&gt;）やイルカ（&lt;span&gt;🐬&lt;/span&gt;）の絵文字は存在するのに、シャチがないのはおかしい — そんな問題提起だった。このissueには前述のChange.org署名へのリンクも添えられていた。&lt;/p&gt;

&lt;p&gt;声は上がった。しかし声だけでは絵文字は生まれない。Unicode Consortiumに正式な提案書を提出する必要がある。&lt;/p&gt;

&lt;h2&gt;
  
  
  2019年2月 — 3人目が動く
&lt;/h2&gt;

&lt;p&gt;2019年、この想いを1本の提案書に変えた人物がいる。スペインの開発者、&lt;strong&gt;Marcos Del Sol Vives&lt;/strong&gt; だ&lt;sup&gt;[5]&lt;/sup&gt;。&lt;/p&gt;

&lt;p&gt;2月20日、MarcosはUnicode Consortiumの絵文字小委員会（Emoji Subcommittee、以下ESC）にシャチ絵文字の提案を送った。アイスランドのJökull、ドイツのChristophとは面識のない、独立した3人目の人物だった。&lt;/p&gt;

&lt;p&gt;さらに2020年9月、ドイツの Lukas Ewert が、これらの動きとは別にChange.org署名「Make Orcas an Emoji」を立ち上げ、356名の賛同を集めている&lt;sup&gt;[15]&lt;/sup&gt;。&lt;/p&gt;

&lt;p&gt;世界各地の、互いに知らない人々が独立して同じことを求めていた。&lt;strong&gt;シャチの絵文字がほしい&lt;/strong&gt;、と。&lt;/p&gt;

&lt;p&gt;Marcosの提案書には、Unicode Consortiumが求めるフォーマットに従った説得力のある内容が含まれていた&lt;sup&gt;[6]&lt;/sup&gt;。&lt;/p&gt;

&lt;h3&gt;
  
  
  検索人気度の比較
&lt;/h3&gt;

&lt;p&gt;Bingでの検索トレンドを使い、「orca」の検索人気度が「elephant（象）」とほぼ同等であることを示した。象にはすでに絵文字（&lt;span&gt;🐘&lt;/span&gt;）があるのに、シャチにはない — この不均衡を数字で可視化した。&lt;/p&gt;

&lt;h3&gt;
  
  
  既存の絵文字では代替できない
&lt;/h3&gt;

&lt;p&gt;Unicode の提案審査には「除外要因」という項目があり、提案者はなぜ除外すべきでないかを自ら論証する必要がある。Marcosの提案書はこれに対して明快だった&lt;sup&gt;[6]&lt;/sup&gt;。&lt;/p&gt;

&lt;p&gt;「既存の絵文字で代用できるのでは？」— できない。シャチは「キラーホエール」と呼ばれるが科学的にはイルカの仲間であり、クジラとは外見がかなり異なる。「特殊すぎるのでは？」— フグ、コオロギ、白鳥といった動物がすでに絵文字になっている以上、シャチが特殊すぎるとは言えない。「一時的な流行では？」— シャチは約1,100万年前から地球にいる、と提案書は答えている。&lt;/p&gt;

&lt;h3&gt;
  
  
  サンプル画像の添付
&lt;/h3&gt;

&lt;p&gt;Unicode Consortiumの規定に従い、18x18ピクセルと72x72ピクセルのサンプル画像を白黒・カラーの両方で用意した。&lt;/p&gt;

&lt;h2&gt;
  
  
  2019年〜2023年 — 閉ざされた門
&lt;/h2&gt;

&lt;p&gt;しかし、ここからMarcosの提案は長い待ち時間に入る。&lt;/p&gt;

&lt;p&gt;提案は2019年に送られた。しかし、正式な提案書番号 &lt;strong&gt;L2/24-249&lt;/strong&gt; がUnicode文書レジストリに登録されたのは &lt;strong&gt;2024年&lt;/strong&gt; のことだった&lt;sup&gt;[6]&lt;/sup&gt;。2019年のDocument Registerにシャチ関連の文書は一件も存在しない&lt;sup&gt;[7]&lt;/sup&gt;。&lt;/p&gt;

&lt;p&gt;この5年間、何が起きていたのか。調べていくと、単なる「放置」ではない、Unicode側の意図的な判断が見えてきた。&lt;/p&gt;

&lt;h3&gt;
  
  
  一度付けたら消せない — Unicodeの鉄の掟
&lt;/h3&gt;

&lt;p&gt;まず前提として、Unicodeには &lt;strong&gt;不変性（Stability Policy）&lt;/strong&gt; という絶対的なルールがある&lt;sup&gt;[17]&lt;/sup&gt;。&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Once a character is encoded, it will not be moved or removed.&lt;br&gt;
（一度エンコードされた文字は、移動も削除もされない）&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;つまり、一度コードポイントを割り当てたら、それは人類がこの標準を使い続ける限り永久に残る。間違いがあっても取り消せない。この鉄の掟が、絵文字の追加に対する慎重さの根本にある。&lt;/p&gt;

&lt;h3&gt;
  
  
  戦略的休止 — 「量より質」への転換
&lt;/h3&gt;

&lt;p&gt;2020年、COVID-19の影響でUnicode 14.0のリリースが6ヶ月延期された&lt;sup&gt;[2]&lt;/sup&gt;。そして2022年秋、UTC（Unicode Technical Committee）はUnicode 15.1を限定的なリリースにすると発表した。&lt;/p&gt;

&lt;p&gt;ESCの議長 Jennifer Daniel は、この状況を「機会」と捉えた。彼女が2023年1月に発表したブログ記事「Breaking the Cycle（循環を断ち切る）」&lt;sup&gt;[18]&lt;/sup&gt;には、こう書かれている。&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;emoji categories are about to hit or have hit a level of saturation.&lt;br&gt;
（絵文字のカテゴリは飽和に達しつつある、あるいは既に達している）&lt;/p&gt;

&lt;p&gt;the ESC approves fewer and fewer emoji proposals every year.&lt;br&gt;
（ESCは年々、承認する絵文字提案を減らしている）&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;ESCはこの休止期間を使い、スキントーン（肌の色）のバリエーション統一、家族絵文字の再設計、書字方向の対応といった積年の課題に取り組んだ。そして &lt;strong&gt;Unicode 17.0の提案受付を2024年4月まで一時的に遅らせる&lt;/strong&gt; と決めた&lt;sup&gt;[18]&lt;/sup&gt;。&lt;/p&gt;

&lt;p&gt;これは怠慢による停止ではなく、絵文字の追加プロセスそのものを再定義するための &lt;strong&gt;意図的な休止&lt;/strong&gt; だった。&lt;/p&gt;

&lt;h3&gt;
  
  
  門の前で待つMarcosの提案
&lt;/h3&gt;

&lt;p&gt;シャチ絵文字が「却下」された記録は見つからない。Charlotte Buffが管理する却下済み絵文字提案リストにも掲載されていない&lt;sup&gt;[10]&lt;/sup&gt;。Unicode公式の非承認通知アーカイブにも見当たらない&lt;sup&gt;[11]&lt;/sup&gt;。&lt;/p&gt;

&lt;p&gt;Marcosの提案は否定されたのではない。門が閉まっていたのだ。&lt;/p&gt;

&lt;h2&gt;
  
  
  2024年 — 門が開く
&lt;/h2&gt;

&lt;p&gt;2024年4月2日、新しいガイドラインとともに提案受付が再開された&lt;sup&gt;[8]&lt;/sup&gt;。ESCの議長Jennifer Danielもこの再開を告知している&lt;sup&gt;[9]&lt;/sup&gt;。&lt;/p&gt;

&lt;p&gt;門が開いたとき、2019年からずっと待っていたMarcosの提案はようやく正式な文書番号 &lt;strong&gt;L2/24-249&lt;/strong&gt; を得た&lt;sup&gt;[6]&lt;/sup&gt;。同年11月、ESCが164個の新絵文字候補をUTC に提案し、シャチもその中に含まれていた&lt;sup&gt;[12]&lt;/sup&gt;。164個の内訳は、新規コードポイント9個と既存絵文字のスキントーンバリエーション約155個。最終的に新規コードポイントのうち「りんごの芯（Apple Core）」1個が取り下げられ、残りの163個がEmoji 17.0として承認された。&lt;/p&gt;

&lt;p&gt;なお、Emoji 17.0の候補に対するPublic Review Issue（PRI #515）では、反対意見は寄せられなかった&lt;sup&gt;[16]&lt;/sup&gt;。&lt;/p&gt;

&lt;h2&gt;
  
  
  2025年9月9日 — 正式採用
&lt;/h2&gt;

&lt;p&gt;Unicode 17.0 / Emoji 17.0の一部として、シャチ絵文字は正式に承認された&lt;sup&gt;[2]&lt;/sup&gt;。最初のオンライン署名から&lt;strong&gt;約9年&lt;/strong&gt;、Marcosの提案からでも&lt;strong&gt;約6年半&lt;/strong&gt;。&lt;/p&gt;

&lt;p&gt;提案者のMarcos自身は、この経緯の詳細を公には多く語っていない。自身のサイト orca.pet には、提案の事実と採用の事実が簡潔に記録されているのみだ&lt;sup&gt;[5]&lt;/sup&gt;。&lt;/p&gt;

&lt;h2&gt;
  
  
  Unicodeの承認プロセス — オープン、しかし慎重
&lt;/h2&gt;

&lt;p&gt;Unicodeの絵文字承認プロセスは、透明性が高い設計になっている&lt;sup&gt;[13]&lt;/sup&gt;。&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;提案の提出&lt;/strong&gt;: 誰でも絵文字の提案書を提出できる&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ESCでの審査&lt;/strong&gt;: 絵文字小委員会が提案を評価し、UTCに推薦するかを判断する&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;UTCでの審議&lt;/strong&gt;: 技術委員会で議論される。議事録も公開されている&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ドラフト候補リストの公開&lt;/strong&gt;: 正式承認前に候補リストが公開され、Public Review Issue（PRI）として一般からのフィードバックを受け付ける。寄せられたフィードバックは公開される&lt;sup&gt;[16]&lt;/sup&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;正式リリース&lt;/strong&gt;: Unicode標準の新バージョンとしてリリースされる&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;提案書はすべてunicode.orgでPDFとして公開されており、誰でも読むことができる&lt;sup&gt;[6]&lt;/sup&gt;。&lt;/p&gt;

&lt;p&gt;一度割り当てたコードポイントは永久に消せないという不変性の掟がある以上、この慎重さには理由がある。シャチの事例が示すように、提案から採用まで数年かかることは珍しくない。しかしそれは、怠慢ではなく、世界中で永久に使われる標準を定めることの重さの表れでもある。&lt;/p&gt;

&lt;h2&gt;
  
  
  各プラットフォームの対応状況（2026年3月時点）
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;プラットフォーム&lt;/th&gt;
&lt;th&gt;対応状況&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Google Noto Color Emoji&lt;/td&gt;
&lt;td&gt;対応済み（v2.051、2025年9月12日リリース）&lt;sup&gt;[14]&lt;/sup&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Apple（iOS / macOS）&lt;/td&gt;
&lt;td&gt;iOS 26.4 / macOS 26.4 ベータで対応中。正式版は2026年3〜4月予定&lt;sup&gt;[1]&lt;/sup&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;X（旧Twitter）&lt;/td&gt;
&lt;td&gt;対応済み（Twemoji v17.0）&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Microsoft（Windows）&lt;/td&gt;
&lt;td&gt;未対応。2026年3月時点でEmoji 16.0に対応したばかり&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Unicodeがコードポイントと意味を定めるが、実際の見た目は各プラットフォーム（Apple、Google、Samsung等）がそれぞれ独自にデザインする。同じ &lt;code&gt;U+1FACD&lt;/code&gt; でも、iPhoneとAndroidでは見た目が異なるのはそのためだ。&lt;/p&gt;

&lt;h2&gt;
  
  
  Noto Color Emoji で表示テスト
&lt;/h2&gt;

&lt;p&gt;この記事では、Google の &lt;strong&gt;Noto Color Emoji&lt;/strong&gt; フォントを読み込んで、新しいシャチ絵文字の表示をテストしている。&lt;/p&gt;

&lt;p&gt;&lt;span&gt;🫍&lt;/span&gt;&lt;/p&gt;

&lt;p&gt;お使いのブラウザやOSがUnicode 17.0の絵文字に対応していない場合でも、Noto Color Emojiフォントを通じて表示されるはずだ。上に大きなシャチが見えていれば成功である。&lt;/p&gt;

&lt;h2&gt;
  
  
  まとめ
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;シャチ絵文字（&lt;span&gt;🫍&lt;/span&gt;）は、アイスランド、ドイツ、スペインと世界各地の独立した声から始まり、9年かけてUnicode標準に採用された&lt;/li&gt;
&lt;li&gt;提案者の Marcos Del Sol Vives さんは、検索データや比較論証を駆使した提案書を作成した&lt;/li&gt;
&lt;li&gt;5年間の空白は「放置」ではなく、Unicodeが絵文字の飽和に向き合い、追加プロセスを再定義するための戦略的休止だった&lt;/li&gt;
&lt;li&gt;Unicodeの絵文字承認プロセスは透明性が高いが、不変性という鉄の掟ゆえに慎重さが求められる&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;普段何気なく使っている絵文字の一つひとつに、こうした物語がある。そしてその物語は、華々しい成功譚ではなく、忍耐と偶然の積み重ねであることが多いのかもしれない。&lt;/p&gt;

&lt;h2&gt;
  
  
  参考文献
&lt;/h2&gt;

&lt;ol&gt;
&lt;li id="ref-1"&gt;iPhone Mania, "iOS26.4で利用可能になる新絵文字のデザインが明らかに！" 2026年3月10日. &lt;a href="https://iphone-mania.jp/ios-600850/" rel="noopener noreferrer"&gt;https://iphone-mania.jp/ios-600850/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-2"&gt;The Unicode Blog, "Unicode 17.0 Release Announcement," 2025年9月9日. &lt;a href="http://blog.unicode.org/2025/09/unicode-170-release-announcement.html" rel="noopener noreferrer"&gt;http://blog.unicode.org/2025/09/unicode-170-release-announcement.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-3"&gt;Crissov/unicode-proposals, "Issue #103: Orca emoji," GitHub, 2017年2月5日. &lt;a href="https://github.com/Crissov/unicode-proposals/issues/103" rel="noopener noreferrer"&gt;https://github.com/Crissov/unicode-proposals/issues/103&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-4"&gt;Jökull Ingi Þorvaldsson, "Make a Killer Whale Emoji," Change.org, 2016年8月. &lt;a href="https://www.change.org/p/apple-make-a-killer-whale-emoji-in-apple-s-emoji-board" rel="noopener noreferrer"&gt;https://www.change.org/p/apple-make-a-killer-whale-emoji-in-apple-s-emoji-board&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-5"&gt;Marcos Del Sol Vives, "Orca emoji," orca.pet. &lt;a href="https://orca.pet/emoji/" rel="noopener noreferrer"&gt;https://orca.pet/emoji/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-6"&gt;Marcos Del Sol Vives, "Proposal for Orca Emoji," Unicode Document L2/24-249, 2024年. &lt;a href="https://www.unicode.org/L2/L2024/24249-orca-emoji.pdf" rel="noopener noreferrer"&gt;https://www.unicode.org/L2/L2024/24249-orca-emoji.pdf&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-7"&gt;The Unicode Consortium, "UTC Document Register — 2019." &lt;a href="https://www.unicode.org/L2/L2019/Register-2019.html" rel="noopener noreferrer"&gt;https://www.unicode.org/L2/L2019/Register-2019.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-8"&gt;The Unicode Blog, "Emoji Submissions Intake Process Re-opening," 2024年3月. &lt;a href="http://blog.unicode.org/2024/03/emoji-submissions-intake-process-re.html" rel="noopener noreferrer"&gt;http://blog.unicode.org/2024/03/emoji-submissions-intake-process-re.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-9"&gt;Jennifer Daniel, "Emoji submissions re-opening," Substack. &lt;a href="https://jenniferdaniel.substack.com/p/emoji-submissions-re-opening-april" rel="noopener noreferrer"&gt;https://jenniferdaniel.substack.com/p/emoji-submissions-re-opening-april&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-10"&gt;Charlotte Buff, "Rejected Emoji Proposals." &lt;a href="https://charlottebuff.com/unicode/misc/rejected-emoji-proposals/" rel="noopener noreferrer"&gt;https://charlottebuff.com/unicode/misc/rejected-emoji-proposals/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-11"&gt;The Unicode Consortium, "Archive of Notices of Non-Approval." &lt;a href="https://www.unicode.org/alloc/nonapprovals.html" rel="noopener noreferrer"&gt;https://www.unicode.org/alloc/nonapprovals.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-12"&gt;Emojipedia Blog, "What's New In Unicode 17.0." &lt;a href="https://blog.emojipedia.org/whats-new-in-unicode-17-0/" rel="noopener noreferrer"&gt;https://blog.emojipedia.org/whats-new-in-unicode-17-0/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-13"&gt;The Unicode Consortium, "Submitting Emoji Proposals." &lt;a href="https://unicode.org/emoji/proposals.html" rel="noopener noreferrer"&gt;https://unicode.org/emoji/proposals.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-14"&gt;Emojipedia Blog, "Google Debuts Emoji 17.0 Support." &lt;a href="https://blog.emojipedia.org/google-debuts-emoji-17-0-support/" rel="noopener noreferrer"&gt;https://blog.emojipedia.org/google-debuts-emoji-17-0-support/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-15"&gt;Lukas Ewert, "Make Orcas an Emoji," Change.org, 2020年9月. &lt;a href="https://www.change.org/p/apple-make-orcas-an-emoji" rel="noopener noreferrer"&gt;https://www.change.org/p/apple-make-orcas-an-emoji&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-16"&gt;The Unicode Consortium, "PRI #515: Unicode Emoji 17.0 Alpha Repertoire." &lt;a href="https://www.unicode.org/review/pri515/" rel="noopener noreferrer"&gt;https://www.unicode.org/review/pri515/&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-17"&gt;The Unicode Consortium, "Unicode Character Encoding Stability Policies." &lt;a href="https://www.unicode.org/policies/stability_policy.html" rel="noopener noreferrer"&gt;https://www.unicode.org/policies/stability_policy.html&lt;/a&gt;
&lt;/li&gt;
&lt;li id="ref-18"&gt;Jennifer Daniel, "Breaking the Cycle," The Unicode Blog, 2024年3月8日（初出: 2023年1月17日）. &lt;a href="http://blog.unicode.org/2024/03/breaking-cycle.html" rel="noopener noreferrer"&gt;http://blog.unicode.org/2024/03/breaking-cycle.html&lt;/a&gt;
&lt;/li&gt;
&lt;/ol&gt;

</description>
      <category>unicode</category>
      <category>emoji</category>
      <category>macos</category>
      <category>orca</category>
    </item>
  </channel>
</rss>
